
AMD makes bold quad-core claims (ZDNet)

January 25, 2007 1:42:21 AM

I think there are two threads on this already. I think.

:) 

One is here:
Click me
January 25, 2007 1:54:42 AM

Jack, we get along so great.

I am giddy like a naked schoolgirl that you enjoy my threads.
January 25, 2007 1:56:45 AM

Quote:
Jack, we get along so great.

I am giddy like a naked schoolgirl that you enjoy my threads.


Take a lesson: same topic, two different posts, two different reactions.
January 25, 2007 2:18:34 AM

Turpit... your problem is you want the glory and the respect Jack gets,

but you lack his substance.

Don't shoot the messenger.
January 25, 2007 2:27:26 AM

Quote:
Turpit... your problem is you want the glory and the respect Jack gets,

but you lack his substance.

Don't shoot the messenger.

?
January 25, 2007 2:29:40 AM

Quote:
I think there are two threads on this already. I think.

:) 

One is here:
Click me


Yeah, but Wes's thread is not aggravating, it is thought out, and it can produce more 'civil' conversation ;) 

Oh, I linked m25's thread, not lordpope's.

I ain't crazy. :D 
January 25, 2007 2:44:56 AM

Quote:
Just thought there was a little interesting info in this article. Any thoughts?
Avoid flame wars please.

http://news.zdnet.co.uk/hardware/0,1000000091,39285591-...

wes


I like the way you put it better than Lordpope's thread; he was asking to get flamed.... :)  ....

Two thoughts here: Barcelona will likely be great in servers, but the desktop will not fare as well; probably parity on the desktop, or very slightly behind. My take anyway.

Second, I would be a little skeptical about translating this to broad applications; the wordsmithing here is a bit ambiguous and likely purposefully vague.

Jack

I don't necessarily think that this should be taken with a grain of salt. I mean, to make claims like, "This is going to be the biggest enhancement to the x86 architecture since we released Opteron. It will take the power-per-watt equation to levels people have never seen before." and then not perform to those standards is worse than not having a competitive product.

It seems like Barcelona was created for servers, and if we get a CPU initially meant to make a workhorse of a server, then I cannot complain. I look forward to a Conroe killer; that would just be ironic, don't ya think? It would be like a cruel joke. I'm not a fanboy, I just think that if AMD stole back the crown after having it for three years and losing it for less than one, that would be funny in a nerdy sort of way.

And if it kills a Conroe and uses a max of 65 W, there's no stopping it. I look forward to it. Anyone else? Admittedly I am a bit biased because I have AMD products, but I also have Intel products, and I don't think one company is better than the other; I just kind of want AMD's solution to be a winner.
January 25, 2007 2:45:47 AM

"Customers don't care whether chips are monolithic or combine separate processors, Allen said, but they do care about performance. "We came to the conclusion that, given the capabilities and performance with the monolithic design, it was clearly the right answer," Allen said."

Jack,

The problem I see with the monolithic quad core is going to be the cost. The bigger you make the silicon, the more it is going to cost. They will probably achieve the performance target of "40%" "across a wide variety of workloads", but at what cost? The Intel quad-core servers may still be the better value; by then, Intel will probably be into round one of price cuts. I didn't get anything from that article on desktops, though, so how do you say the desktop will not fare as well when they haven't given a solution for desktops? I would have to believe they are working on a desktop answer to the Core 2 Duos, but I don't see it in this article.
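
As a rough illustration of why die size hits cost so hard, here is a back-of-the-envelope sketch using the classic Poisson yield model. The wafer cost, die areas, and defect density below are made-up numbers for illustration, not real fab data:

    import math

    def cost_per_good_die(wafer_cost, wafer_area_mm2, die_area_mm2, defect_density):
        """Poisson yield model: yield = exp(-D * A). Illustrative numbers only."""
        dies_per_wafer = wafer_area_mm2 / die_area_mm2      # ignores edge loss
        yield_fraction = math.exp(-defect_density * die_area_mm2)
        return wafer_cost / (dies_per_wafer * yield_fraction)

    # Hypothetical: $5000 per 300 mm wafer (~70,000 mm^2), 0.002 defects/mm^2
    mcm_quad  = 2 * cost_per_good_die(5000, 70000, 140, 0.002)  # two dual-core dies
    mono_quad = cost_per_good_die(5000, 70000, 280, 0.002)      # one big quad die
    print(f"MCM quad: ${mcm_quad:.2f}   monolithic quad: ${mono_quad:.2f}")

With these invented numbers, the monolithic die comes out about a third more expensive per good chip than two half-size dies, which is exactly the trade-off being argued here.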
January 25, 2007 2:47:33 AM

Quote:
Turpit... your problem is you want the glory and the respect Jack gets,

but you lack his substance.

Don't shoot the messenger.


Here we go again :roll:
January 25, 2007 3:27:24 AM

Quote:
Turpit... your problem is you want the glory and the respect Jack gets,

but you lack his substance.

Don't shoot the messenger.


LMAO.

I don't have his "substance", I never have and I never will. :roll:
You should go back and read a ways. Search "turpit" and read how many times I've made mistakes, how many times I've posted "what a ($#@#@) I am" and " :oops:  ", and how many times I've had to say "sorry". While you're doing that, you should also note that when the BS levels are low, I don't post much. So much for the glory theory.

But you should also read how many times I've said I despise people who misrepresent the truth for their own purposes, and how I despise both the horde and the intelliots.

Sorry, friend... you're barking up the wrong tree, but very nice try. Go ahead and feel free to keep fishing.
January 25, 2007 3:49:04 AM

Please do not carry rants from one thread to another; it hurts. Plus, it is disrespectful to the OP. What was disagreed on in one thread should stay in that thread. :wink:
January 25, 2007 3:56:06 AM

Quote:


AMD can't glue two die together

Jack

Source???
January 25, 2007 4:17:22 AM

I'm beginning to wonder if Fusion isn't just a way for AMD to take IGP market share away from Intel. And that's it.

I seriously doubt that any iteration of Fusion will be able to compete with a video card from nVidia or (AMD's own) ATI. So if the aim isn't to outperform, what is the aim? Money? I must admit, I'm not sure what AMD's grand scheme is here. I can't see how combining a GPU and CPU will make anyone any more money.

For 90% of users, the IGP is good enough. Who exactly are they targeting?

Any input would be welcomed.
January 25, 2007 4:20:53 AM

I did a quick scan, but I wasn't going to waste time on it, since I don't find this info to be all that informative. Even before Core 2 came to market, I didn't really listen to all the claims, because the companies tend to exaggerate them. I feel the same way about this one; until I see actual benches, I don't care what they say. I do find it interesting, but I wouldn't be willing to invest emotion or money in it.

wes
January 25, 2007 4:26:51 AM

Quote:
I'm beginning to wonder if Fusion isn't just a way for AMD to take IGP market share away from Intel. And that's it.

I seriously doubt that any iteration of Fusion will be able to compete with a video card from nVidia or (AMD's own) ATI. So if the aim isn't to outperform, what is the aim? Money? I must admit, I'm not sure what AMD's grand scheme is here. I can't see how combining a GPU and CPU will make anyone any more money.

For 90% of users, the IGP is good enough. Who exactly are they targeting?

Any input would be welcomed.


No, I personally don't think ATI was acquired just for IGP or chipsets.... Fusion has much more utility than just doing raw graphics; a well-tuned GPU core is also very efficient at parallel computations on simple FPU code..... if a GPU makes it into the CPU (and likely before), you will see generic non-graphics code running through the GPU at some point.

Also, the industry is heading toward this in some fashion, I am convinced... it is just a matter of time. AMD has talked about it before Fusion (modular, stamp-it-out designing), Intel has published papers about it, and IBM has built one that is currently being used in the PS3 :)  .... Look for 'application'-specific CPUs and less and less emphasis on general-purpose CPUs. There will be multicore chips with scores and scores of video-optimized cores for video encoding, multimedia, etc., or scores and scores of graphics-oriented cores for 3D simulations, etc. AMD is simply working to position themselves for the future. Intel can afford, and has the resources, to build from within; on the timescale needed, I don't think AMD had any other choice. But honestly, I have not thought it completely through; the ATI acquisition was a bit of a surprise to me, so my 'thought' process on this has really only just begun.... hence, my opinions are subject to change as more data comes in :)  ....

Jack
AMD's move to 64-bit looking at us in the mirror, except a few years older?
January 25, 2007 4:26:55 AM

No flame war here, but I think we should start supporting AMD, because if we don't, we're likely to see Intel monopolizing the market. This is something that cannot happen. If it does, the average enthusiast or even casual user will either not be able to afford a new processor or have to settle for a low-end CPU. I like being able to buy a mid-to-high-end CPU. But if Intel manages to monopolize the industry, that's exactly what will happen.

Dahak

AMD X2-4400+@2.6 S-939
EVGA NF4 SLI MB
2X EVGA 7800GT IN SLI
2X1GIG DDR IN DC MODE
WD300GIG HD
EXTREME 19IN.MONITOR 1280X1024
THERMALTAKE TOUGHPOWER 850WATT PSU
COOLERMASTER MINI R120
January 25, 2007 4:27:53 AM

Turpit and Pope,

CUT IT OUT. I didn't realize the other threads with the same link existed. If I had known, I would not have bothered starting a new one. As Everret said, leave those arguments in that thread, please; try not to ruin this one as some of the other members (who will remain unnamed; you guys know who I am talking about) always do. That's it: I thought it was interesting info and wanted opinions, not wars.

Thanks
wes

Edit: also, I didn't notice the time zone on the article; I was thinking local time, not Zulu time. So, I thought it popped up not too long before I made the thread.
January 25, 2007 4:29:40 AM

Quote:
I'm beginning to wonder if Fusion isn't just a way for AMD to take IGP market share away from Intel. And that's it.

I seriously doubt that any iteration of Fusion will be able to compete with a video card from nVidia or (AMD's own) ATI. So if the aim isn't to outperform, what is the aim? Money? I must admit, I'm not sure what AMD's grand scheme is here. I can't see how combining a GPU and CPU will make anyone any more money.

For 90% of users, the IGP is good enough. Who exactly are they targeting?

Any input would be welcomed.


No, I personally don't think ATI was acquired just for IGP or chipsets.... Fusion has much more utility than just doing raw graphics; a well-tuned GPU core is also very efficient at parallel computations on simple FPU code..... if a GPU makes it into the CPU (and likely before), you will see generic non-graphics code running through the GPU at some point.

Also, the industry is heading toward this in some fashion, I am convinced... it is just a matter of time. AMD has talked about it before Fusion (modular, stamp-it-out designing), Intel has published papers about it, and IBM has built one that is currently being used in the PS3 :)  .... Look for 'application'-specific CPUs and less and less emphasis on general-purpose CPUs. There will be multicore chips with scores and scores of video-optimized cores for video encoding, multimedia, etc., or scores and scores of graphics-oriented cores for 3D simulations, etc. AMD is simply working to position themselves for the future. Intel can afford, and has the resources, to build from within; on the timescale needed, I don't think AMD had any other choice. But honestly, I have not thought it completely through; the ATI acquisition was a bit of a surprise to me, so my 'thought' process on this has really only just begun.... hence, my opinions are subject to change as more data comes in :)  ....

Jack

Ahhh.... see, now I get it. I misunderstood what Fusion was all about. And yep, it makes sense now that you explain it.
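
For anyone wondering what "simple FPU code" looks like in practice, here is a minimal sketch of SAXPY, the archetypal data-parallel kernel; plain Python is standing in for what a GPU would spread across hundreds of lanes, and the function and data are illustrative, not from any real GPGPU API:

    # SAXPY (y = a*x + y): every element is independent, which is exactly
    # the property that lets a GPU-style array of simple FPUs chew through
    # it in parallel. This loop shows the pattern, not the performance.
    def saxpy(a, x, y):
        return [a * xi + yi for xi, yi in zip(x, y)]

    print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))  # -> [12.0, 24.0, 36.0]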
January 25, 2007 4:40:21 AM

Jack,

What if the socket were designed with an extra female pin, and for those CPUs that didn't need it, just don't populate the pin? So, what IF they had the socket to fit it in, are you saying it still would run correctly (with subpar performance)? Not saying I disagree, just curious about this subject.

wes
January 25, 2007 4:41:37 AM

Intel did it. And what a fine microprocessor we got... the Pentium D.

hehehehe...
January 25, 2007 4:58:35 AM

I understand what you are saying. I wonder, though, why couldn't the on-die IMC be used just like the IMC on the mobo which Intel uses? Never mind, I think I just answered my own question. Each additional core has to be hard-linked to it, so you can't just paste another dual core on without having the additional pins for the additional IMC to communicate with the memory. Well, that sucks for AMD. While the monolithic design might have advantages, it definitely has its disadvantages. So, the only way would be to have multiple sockets (confusing; mobo makers would hate it), or have a one-size-fits-all type of deal with a ton of pins which the low-end CPUs wouldn't even use. This would drive the cost of the board up, and those who want to build low-end machines would hate it. The IMC is putting them in a bit of a pickle.

wes

Edit: The IMC would have to be designed to be modular, so I don't think it would be possible to make it part of the core if it were modular (using "modular" to describe plugging extra cores into it, like Intel). If it were part of the core, then every core would have this modular IMC. It doesn't sound like it is possible without having a massive socket. Otherwise, if they were using the same pins, one core would have to supply the other core with the desired data, or it would swap back and forth, or slave, like you said.
January 25, 2007 5:01:51 AM

I'm just fcukin with ya
January 25, 2007 5:22:57 AM

Quote:
No flame war here, but I think we should start supporting AMD, because if we don't, we're likely to see Intel monopolizing the market. This is something that cannot happen. If it does, the average enthusiast or even casual user will either not be able to afford a new processor or have to settle for a low-end CPU. I like being able to buy a mid-to-high-end CPU. But if Intel manages to monopolize the industry, that's exactly what will happen.


Unfortunately, this is diametrically opposed to rewarding producers of superior products with superior pricing. In other words, if Intel doesn't make money for having a clearly superior product and extremely competitive pricing (from a consumer's viewpoint), they will stop innovating, because it costs a lot of R&D time and money to design a 500-600 million transistor CPU. And conversely, if AMD is rewarded for having an inferior product, what incentive will they have to improve either? The capitalistic system, such as it is, is supposed to reward those who offer the best products and features at the best price points.

Right now, Intel has the better product and pricing in the mid-to-high-end CPU market. AMD is clearly the leader in the low end. If AMD really wants more of the mid-to-high-end CPU market, then they need to either come out with a clearly better product at comparable pricing, or offer the products they do have at a more advantageous price point.

As I see it, there is no compelling evidence to sway me toward buying an AMD CPU right now, unless you're already invested in an AM2 system.
January 25, 2007 5:24:57 AM

Couldn't AMD disable the dual-channel mode of each IMC and just run each one in single channel, without the major hit of slaving one core to the other? True, we would be back to the pre-DIMM days, when 32-pin SIMMs had to be installed in pairs.
January 25, 2007 5:28:46 AM

True, it is good to support superior technology, but without AMD, Intel would still be making PIIs and charging $1000 for 500 MHz.
January 25, 2007 5:54:26 AM

How should we get the data from the HD?
January 25, 2007 5:56:13 AM

Quote:
I'm beginning to wonder if Fusion isn't just a way for AMD to take IGP market share away from Intel. And that's it.

I seriously doubt that any iteration of Fusion will be able to compete with a video card from nVidia or (AMD's own) ATI. So if the aim isn't to outperform, what is the aim? Money? I must admit, I'm not sure what AMD's grand scheme is here. I can't see how combining a GPU and CPU will make anyone any more money.

For 90% of users, the IGP is good enough. Who exactly are they targeting?

Any input would be welcomed.
I may be way off base here, but I believe current IGPs can't run all the bells and whistles on Vista. Maybe this is why?
January 25, 2007 5:59:35 AM

I have run Vista Ultimate RC2 on an nVidia 6150 IGP with 256 MB of shared memory and had no problems, with all the fancy visuals enabled.
January 25, 2007 6:04:35 AM

Forgive me, I just couldn't help myself when beerandcandy mentioned FSB.
January 25, 2007 7:18:50 AM

I've been wondering for years (at least 5 or 6) when the manufacturers would smarten up to using larger caches on-die with the CPU. Basically, what they really need to do is provide sub-divided cache sections (say 1 MB each, 16 MB total) that allow the CPU to load executables and lists into different cache 'chunks'. This would allow even more efficient throughput from the CPU, since it could concentrate more on actual execution and less on waiting for data to be loaded from off-die.

Basically, the idea is you want to maximize the amount and flow of data through the L2 cache, since any calls to RAM have an increased clock penalty, especially as FSB and CPU cycles rise. If a standardized large-cache system were implemented, this would allow software designers (and hardware engineers) to code more efficiently for optimized system throughput. Since you have a 'minimum' number of L2 'chunks' to deal with, you're able to set up memory calls and code execution much more effectively. You'd even be able to better allocate CPU priority between multiple programs, since a smaller number of 'chunks' would be used to hold 'mini-programs' for background tasks.
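
On the software side, the way you exploit a cache 'chunk' today is blocking (loop tiling): keep a working set small enough to stay resident while you reuse it. Here is a minimal sketch, with a made-up tile size rather than one tuned for any real L2:

    # Cache-blocking sketch: multiply n x n matrices tile by tile so each
    # TILE x TILE block of a, b, and c stays cache-resident while reused.
    TILE = 64  # hypothetical; in practice, pick it so ~3 tiles fit in the target cache

    def tiled_matmul(a, b, n):
        c = [[0.0] * n for _ in range(n)]
        for ii in range(0, n, TILE):
            for kk in range(0, n, TILE):
                for jj in range(0, n, TILE):
                    for i in range(ii, min(ii + TILE, n)):
                        for k in range(kk, min(kk + TILE, n)):
                            aik = a[i][k]
                            for j in range(jj, min(jj + TILE, n)):
                                c[i][j] += aik * b[k][j]
        return c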
January 25, 2007 7:34:04 AM

Jack, I think that "gluing" two K8 cores together is possible. It will require two RAM modules, and it will perform the same as QuadFX.
Connect two dies via a cHT link and one DDR/DDR2 channel to each die. One of the dies will be connected to the northbridge chipset, the same way the two CPUs and the northbridge are connected on the QFX mainboard.
January 25, 2007 8:12:02 AM

I suspect that AMD could design their quads to be made on two separate dies. They could not be run independently (unless both had ODMCs that could run with two or four cores), but it would effectively take care of a "yield problem". They could even build them with a separate ODMC.
They have been working on the modular concept lately.
January 25, 2007 8:49:01 AM

Quote:
CSI


CSI will not be here until 2008 or later.
January 25, 2007 9:56:16 AM

Quote:
... You'd even be able to better allocate CPU priority between multiple programs, since a smaller number of 'chunks' would be used to hold 'mini-programs' for background tasks.


On my work rig, I run VS 2005, which eats up around 500 MB; SQL Server, which also uses up to 500; the Pocket PC emulator and utilities, +200; various Google, Yahoo, and Skype bars and applications, a browser, and some system services, +150 at least; and the ERP application we are developing, +400 MB. My 2 GB of RAM aren't quite enough. Sometimes add Photoshop to the mix, Camtasia, and various other ones... and I ain't even running Vista, even though I have had it since December from MSDN... Vista would require at least 1 GB more to work the same...

... over 30 GB of installed applications alone, no media, games, or anything else...

... my point is that the days of small applications passed a long time ago...
January 25, 2007 1:28:10 PM

Eagle, AMD has to make monolithic dies because the features that they put in the chip require it. Intel's CPUs don't have these features and thus can be made as two different dies. AMD uses an integrated memory controller on-die, and this requires that all cores in the CPU be on the same die. Intel has the memory controller off-die in the northbridge, feeding the CPU via an FSB. Intel can use that same FSB to stitch together the two dies in their multi-chip modules (MCMs). AMD's quad-core also has a shared L3 cache between all 4 cores, and this also requires that all the cores be on the same die. Intel's Core Duo and Core 2 Duo CPUs share the L2 cache and thus have to be on one piece of silicon as well. One other reason that AMD's Barcelona CPU has to be on one piece of silicon is that each core has independent power profiles; e.g., you can have one loaded core running at full speed while the other three idle at 1.0 GHz. AFAIK, this cannot be done when the chips are not on the same die.
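
To put a toy number on the per-core power-state point: if dynamic power scales roughly as f*V^2 and voltage tracks frequency (so power goes roughly as f cubed), letting three idle cores drop to 1.0 GHz saves a lot versus forcing all four to the loaded core's speed. All figures below are invented for illustration, not Barcelona specs:

    # Toy model: per-core DVFS vs. all cores locked to the busiest core's
    # frequency. Assumes power ~ f**3 (f*V^2 with V proportional to f).
    def chip_power(core_freqs_ghz, watts_per_core_at_max=20.0, f_max=2.5):
        return sum(watts_per_core_at_max * (f / f_max) ** 3 for f in core_freqs_ghz)

    independent = chip_power([2.5, 1.0, 1.0, 1.0])  # one loaded, three idling
    lockstep    = chip_power([2.5, 2.5, 2.5, 2.5])  # everyone at full speed
    print(f"per-core: {independent:.1f} W   lockstep: {lockstep:.1f} W")
    # per-core: 23.8 W   lockstep: 80.0 W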
January 25, 2007 1:37:46 PM

I think they will decrease their cache, at least on some CPUs with IMCs. If they don't need it, why put it there? It just decreases processors per wafer and percentage yields.

However, I think that Intel will still make a line of CPUs, the EEs or Xeons, that do have large caches, just because they can sell them at a premium, even if it doesn't make performance much better.
January 25, 2007 1:42:12 PM

Quote:
I think they will decrease their cache, at least on some CPUs with IMCs. If they don't need it, why put it there? It just decreases processors per wafer and percentage yields.

However, I think that Intel will still make a line of CPUs, the EEs or Xeons, that do have large caches, just because they can sell them at a premium, even if it doesn't make performance much better.


AMD will be making dual-core processors with L3 (Kuma) and without L3 (Rana). We don't know how much performance the L3 cache contributes now, but surely we will see it later. :wink:
January 25, 2007 2:11:38 PM

Quote:
I've been wondering for years (at least 5 or 6) when the manufacturers would smarten up to using larger caches on-die with the CPU. Basically, what they really need to do is provide sub-divided cache sections (say 1 MB each, 16 MB total) that allow the CPU to load executables and lists into different cache 'chunks'. This would allow even more efficient throughput from the CPU, since it could concentrate more on actual execution and less on waiting for data to be loaded from off-die.

Basically, the idea is you want to maximize the amount and flow of data through the L2 cache, since any calls to RAM have an increased clock penalty, especially as FSB and CPU cycles rise. If a standardized large-cache system were implemented, this would allow software designers (and hardware engineers) to code more efficiently for optimized system throughput. Since you have a 'minimum' number of L2 'chunks' to deal with, you're able to set up memory calls and code execution much more effectively. You'd even be able to better allocate CPU priority between multiple programs, since a smaller number of 'chunks' would be used to hold 'mini-programs' for background tasks.


One word: cost. Cache already takes up about half the die space, and SRAM cells take 6 transistors each. 1T-SRAM cells are slower. Z-RAM or TTRAM cells will get there for AMD eventually (and cost will drop significantly), but until then, expect them to stick to the smaller-cache, low-latency path.
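
The 6-transistors-per-cell point is easy to sanity-check with straight arithmetic (data array only; tags, ECC, and decoders would add more):

    # Transistor budget for SRAM cache at 6 transistors per bit cell.
    def sram_transistors(cache_mib, t_per_cell=6):
        return cache_mib * 1024 * 1024 * 8 * t_per_cell

    for size_mib in (2, 4, 16):
        print(f"{size_mib} MiB -> {sram_transistors(size_mib) / 1e6:.0f}M transistors")
    # 2 MiB -> 101M, 4 MiB -> 201M, 16 MiB -> 805M

Against CPUs of this era in the 200-300 million transistor range, a 16 MB cache alone would dwarf the logic, which is why the idea above runs straight into cost.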

I expect we'll see a lot of Barcelona dual-core chips out there when this is done, due to binning. Tri-core chips? Who knows... it might happen.
January 26, 2007 8:41:22 AM

Quote:
However, I think that Intel will still make a line of CPUs, the EEs or Xeons, that do have large caches, just because they can sell them at a premium, even if it doesn't make performance much better.


That's BS. In personal computer workloads, caches bring a small performance increase. In servers, bigger problems arise. Some applications need large amounts of RAM, for example (hence the support for PAE on the previous Xeons, even though it degraded performance at the same capacity). Caches also bring scalability: systems with 8/16/32/64 or more processors scale poorly, and adding large amounts of L3 on the K8L is said to bring good things on the scalability side at high CPU counts.

On the PC, adding a slower L3 cache might instead degrade performance.

The types of workloads are also different. Intel's 5000X chipset and FB-DIMM may be slower for us (PC users), who need burst bandwidth, but sustained bandwidth is more important for the market the 5000X chipset sells into.
January 26, 2007 8:58:00 AM

The problem with these computer forums is that they look at benchmarks for personal computer users but talk about technologies relevant to the workstation/server side.

HTT (and CSI) is irrelevant to PC users. Don't some of you guys remember the claim that the integrated memory controller alone brought 20% performance? Most of the performance increase that K8 had over K7 was brought by SSE2 support and the IMC.

HTT for PCs brings faster connections for SATA/USB (I/O devices) and FireWire devices to share. Great. Those devices can't even saturate their own interfaces (drives can't take advantage of SATA150).

HTT would have been relevant to PC users if it were the bus used for CPU-to-memory communication. But it is not; the integrated memory controller is. The IMC is connected to the core using a crossbar, and the IMC is directly connected to the memory.
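
A quick utilization check on the SATA150 remark; the drive figure is an assumed ballpark for a 2007-era 7200 rpm desktop disk, not a measured spec:

    # How much of a 150 MB/s SATA150 link can one era-typical HDD use?
    sata150_mb_s = 150.0
    typical_hdd_sustained_mb_s = 65.0  # assumed ballpark, not a datasheet value
    print(f"link utilization: {typical_hdd_sustained_mb_s / sata150_mb_s:.0%}")
    # -> link utilization: 43%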
January 26, 2007 9:37:26 AM

Quote:
For 90% of users, the IGP is good enough. Who exactly are they targeting?

1.3 billion Chinese who want cheap machines?
January 26, 2007 10:48:26 AM

Quote:
For 90% of users, the IGP is good enough. Who exactly are they targeting?

1.3 billion Chinese who want cheap machines?

I don't think so :wink:
January 26, 2007 12:15:07 PM

Quote:
The problem with these computer forums is that they look at benchmarks for personal computer users but talk about technologies relevant to the workstation/server side.

HTT (and CSI) is irrelevant to PC users. Don't some of you guys remember the claim that the integrated memory controller alone brought 20% performance? Most of the performance increase that K8 had over K7 was brought by SSE2 support and the IMC.

HTT for PCs brings faster connections for SATA/USB (I/O devices) and FireWire devices to share. Great. Those devices can't even saturate their own interfaces (drives can't take advantage of SATA150).

HTT would have been relevant to PC users if it were the bus used for CPU-to-memory communication. But it is not; the integrated memory controller is. The IMC is connected to the core using a crossbar, and the IMC is directly connected to the memory.


I'm not much of a tech geek, but I scan the forum daily to try to get up to speed on what I don't know. DavidC1's comment, from a novice's point of view, seems to be a bullseye. Often, it seems that posters mix apples and oranges (or maybe it should be 'mix DDR and DDR2').
The crosstalk is particularly confusing for noobs like me, who don't know enough to identify subtle misdirection or misapplication of a correct statement.
A call goes out to the 'smart' people to keep every contributor on-topic, relevant, and willing to re-look at their own individual statements of fact and opinion..... (except for the severely metacognitively challenged :D  )

Finally... how correct is the HTT/IMC point, as it relates to PC users?
January 27, 2007 3:40:14 AM

Quote:
Avoid flame wars please.


Here?? On THG?? That's asking the impossible of little nerd boys, isn't it?

I am kind of new here, but from what I've read, you just invited BaronMatrix to join in?
January 27, 2007 3:48:21 AM

There are a number of worthwhile real-life engineers (30+) on here. I am fairly new, but JumpinJack could probably point them out for you. Most of the college kids are pretty easy to spot after a few posts, but some of them are also fairly good. Read through the "who are we" thread and you may be able to identify some forum members to follow.
January 27, 2007 3:54:16 AM

I personally think Intel will keep its crown for at least another fiscal quarter even after Barcelona comes out.
January 27, 2007 4:12:11 AM

Quote:
I personally think Intel will keep its crown for at least another fiscal quarter even after Barcelona comes out.

Yeah, of course, because the market is not moving to new technology fast. :wink:
January 27, 2007 12:37:12 PM

I just found another article with a little more info on the upcoming AMD CPUs. I figured, since this thread is healthy, rather than starting a new one I would post the link in here. I imagine a number of you might have seen this already, but I will post it anyway.

http://www.dailytech.com/More+AMD+Next+Generation+Deskt...

I also found this article on Anand about the upcoming 45 nm Intel parts. I know this might not be the right place, but the thread atmosphere is stable, so I am posting it here also.

http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=291...

Jack, anything interesting to you in the above links? You seem to normally pick out small details that might be mundane to the untrained eye and show why they are more important than they look, and so on.

wes

Edit: typo... also, if you guys think the Intel link is thread-worthy (i.e., new info not yet seen) and don't think it is suitable to add to this thread, by all means make a new one and I will edit this post to remove it. I am going to sleep and don't care to start another thread, for fear I missed that one has already been started with this link.
January 27, 2007 9:45:14 PM

Well,

It's quite obvious I should have posted both of these in new threads, even though the links were posted in this thread well before the new threads based on the links were started. Next time, I will just start a new thread. Oh well.

wes