
Bulldozer more a server CPU than a desktop?

Tags:
  • CPUs
  • Desktops
  • Servers
  • Product
April 6, 2011 8:33:46 AM

IDK if it's been asked before, but I'm getting a little worried about what I've been reading. A lot of the info I've been reading on BD points to it being more of a server-side CPU than a desktop one, which fails :p

"Actually, 90% of the execution happening inside a server is integer, not floating point, so having more integer capability is more important for the vast majority of the workloads."

Quoted from John. That's just one example out of many, and as soon as something is asked about it being more server-side, he's quick to change tack and say it will be good for desktops / gaming. What's your opinion on it, guys?

Please, no fanboy rants here; I'm just looking for info / opinions. I'm not very knowledgeable about CPU architecture and such.


April 6, 2011 9:28:12 AM

From what I've read, Bulldozer has been used in server CPUs (and BD really IS a server CPU) since some time in 2009-2010; they're code-named Magny-Cours.

A 12-core Magny-Cours CPU at 2.2 GHz almost reaches the same benchmark score as a 980X at stock speed.

Since BD will launch with its highest-end version being an 8-core at high clocks (3.2 GHz+; anything lower would be a step backwards), you can pretty much guarantee it'll at least be on par with a last-gen i7, similar to SB, and a great all-round architecture.
April 6, 2011 9:40:22 AM

No, BD is not Magny Cours.

From what I can see, the reason they focus on integer so much is that they believe floating-point calculations are bound to be handled by the GPU. Just look at the trend: Photoshop going CUDA, Flash and HTML5 hardware acceleration, brute-force hacking via CUDA/ATI Stream, and so on. The first batch of BD will not have an integrated GPU. However, according to some of the articles posted on AMD blogs, it seems highly likely that the second batch of BD will come with graphics integrated, thereby solving the floating-point bottleneck.

This is speculation at best. Let's wait and see, shall we?
April 6, 2011 11:18:00 AM

BD is not Magny-Cours. Bulldozer is not a "server processor"; it is simply targeted at threaded environments.

Also, I would not say "floating-point bottleneck"; we will have higher FP performance on BD than on our current products.
April 6, 2011 11:59:07 AM

Should we expect server-side prices then? I hope not :)
April 6, 2011 5:52:04 PM

There are plenty of people that want more cores and more threads on the desktop side. Don't count that out.
April 7, 2011 2:26:20 AM

@JF - let me rephrase. Sure, it wouldn't be a real "bottleneck" compared to the existing products AMD offers, but it may lose out to competing products like Sandy Bridge. Again, though, this is just speculation.

I agree with JF here. You may think that you do not need 8 cores now. But think about a year later or maybe 2. When dual core came out, many said they would not need 2. But look at the trend now. Look at the latest games. Look at multi-gpu scaling. Faster processors along with more cores tend to allow multi-cards to scale better. That is only one of the advantages. This is the enthusiast / gamer / tech savvy market.

Performance isn't the only factor you should consider now. Think about power efficiency and value. Think about price. I don't give a crap even if BD is slower than SB; I don't need the bragging rights. I am a developer, and I just want a processor that handles things well enough at the right price.

I am sure many would do the same. Not everyone can afford to hold such bragging rights, especially in third world countries.
April 8, 2011 9:21:24 PM

HighPies said:
IDK if it's been asked before, but I'm getting a little worried about what I've been reading. A lot of the info I've been reading on BD points to it being more of a server-side CPU than a desktop one, which fails :p


I hate to break it to you, but most of the desktop CPUs we currently run are based on server CPUs and use a lot of server CPU technology. NetBurst and Atom are the only new CPU designs out of Intel since 1995 that aren't based on a server chip. The Itanic never made it into the desktop, and everything else is based on 1995's Pentium Pro server CPU. Given the crappy performance of NetBurst and the Atom, it's probably better that Intel stick to using server tech for desktop CPUs since everything else has been pretty mediocre to downright crappy. AMD has very obviously designed the K8 and K10 for server usage, to the point that they even released the server CPUs before the desktop-branded ones. Also, if you use a 64-bit OS, enjoy the performance advantages of an IMC over the old FSB, or use any virtualization technology at all (such as Windows XP Mode on Windows 7), that's server technology.

A CPU being designed for a server does not necessarily "fail" in the desktop. Look at the first Athlon 64 FXes and the Core i7 900 series. Those are server CPUs rebranded for desktop and even use the same sockets, whereas the "desktop" variants (Socket 754 A64s and LGA1156 Core i-series) were generally less desirable to enthusiasts than the server-based versions.

HighPies said:
Should we expect server-side prices then? I hope not :)


Probably, since a good chunk of AMD's server line is priced below the $300 the Phenom II X6 1100T goes for. :D  I know, not the answer you were looking for. I do doubt they'll charge the same kinds of prices on desktop Bulldozers that Intel charges for six-core LGA1366 Xeons or any of the LGA1567 Xeons. My guess is that the 2nd-to-the-top Bulldozer CPU is no more than $500 even if it is considerably faster than the i7-990X or any of the Sandy Bridges out at the time. I'd even guess more like $300-350, since even Intel has only a couple of CPUs above that line. The market has gotten used to CPUs being relatively inexpensive and I doubt that anything is going to reverse that trend and cause people to routinely shell out $500+ for CPUs like they did 10-15 years ago.
April 8, 2011 11:20:54 PM

HighPies said:
IDK if it's been asked before, but I'm getting a little worried about what I've been reading. A lot of the info I've been reading on BD points to it being more of a server-side CPU than a desktop one, which fails :p

"Actually, 90% of the execution happening inside a server is integer, not floating point, so having more integer capability is more important for the vast majority of the workloads."

Quoted from John. That's just one example out of many, and as soon as something is asked about it being more server-side, he's quick to change tack and say it will be good for desktops / gaming. What's your opinion on it, guys?

Please, no fanboy rants here; I'm just looking for info / opinions. I'm not very knowledgeable about CPU architecture and such.



Mate, the first i7s were server chips, and nobody would call Nehalem a failure.
April 11, 2011 5:14:30 AM

Bulldozer vs. SB/IB will be like the i5-760 vs. one of the X6s from AMD.

So: do you want better IPC, or do you run software that is highly threaded?

I think SB/IB would clearly be the better all-round processor for probably 95% of desktop users, so AMD is going to have to rely very heavily on marketing to convince people that the world is going to become insanely highly threaded, or they will have to price-bomb when Ivy Bridge comes out.
April 11, 2011 6:53:01 AM

Quote:
^ Actually I disagree; I think an AMD Athlon II X2 would be enough for 90% of desktop users

You are disagreeing with something I never said.

I didn't state what was a sensible minimum spec for people.
April 11, 2011 2:42:15 PM

^Greg, you sure turned into an Intel fanboy after buying that i5-2500. I don't blame you; Intel products are so good that they can turn anyone into a loyalist. I know this is the second time I'm telling you this, but I just can't believe your transformation.
April 11, 2011 4:52:36 PM

Quote:
From what I can see, the reason they focus on integer so much is that they believe floating-point calculations are bound to be handled by the GPU


True, but in games, and in PHYSICS in particular, you have massive FP computations going on, which are getting more and more complex. And those are still done on the CPU, which would make BD's one FP unit per module [2 cores] a VERY significant bottleneck.

I have serious concerns about BD's FP performance, and doubt that BD will even be as good as the first-gen i7s, let alone Sandy Bridge, in FP-heavy situations [games, etc.]. [Note: I was also the same guy on these forums who was worried about the 5000 series' tessellation performance. Just reminding everyone...]
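Just to make the kind of workload I mean concrete, here is a minimal, hypothetical sketch (not from any actual engine) of the sort of FP-heavy particle integration loop game physics runs on the CPU - every line is floating-point math, so two such threads landing on one BD module would be fighting over the same FP unit:

```cpp
// Hypothetical example: a toy particle integrator, the sort of FP-heavy
// loop game physics keeps on the CPU. Not taken from any real engine.
#include <vector>
#include <cstddef>

struct Particle {
    float px, py, pz;   // position
    float vx, vy, vz;   // velocity
};

// One Euler integration step: pure floating-point work (multiply + add per component).
void integrate(std::vector<Particle>& particles, float dt, float gravity)
{
    for (std::size_t i = 0; i < particles.size(); ++i) {
        Particle& p = particles[i];
        p.vz += gravity * dt;   // apply gravity
        p.px += p.vx * dt;      // advance position on each axis
        p.py += p.vy * dt;
        p.pz += p.vz * dt;
    }
}
```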
April 11, 2011 9:24:37 PM

gamerk316 said:
Quote:
From what I can see, the reason they focus on integer so much is that they believe floating-point calculations are bound to be handled by the GPU


True, but in games, and in PHYSICS in particular, you have massive FP computations going on, which are getting more and more complex. And those are still done on the CPU, which would make BD's one FP unit per module [2 cores] a VERY significant bottleneck.

I have serious concerns about BD's FP performance, and doubt that BD will even be as good as the first-gen i7s, let alone Sandy Bridge, in FP-heavy situations [games, etc.]. [Note: I was also the same guy on these forums who was worried about the 5000 series' tessellation performance. Just reminding everyone...]



When did Thuban get more than one FPU per core? Have you looked at the architectural details to come to your conclusion? AMD is still better at FP than Intel, but benchmarks don't show that unless you look at SPECFP or SPECINT.

I've studied the architecture, and the prefetch improvements alone should catch it up to Intel. And that's not to mention the dual FP pipes (one core can schedule both units each cycle). The additional issue width will also make a difference in desktop apps: BD is 4-issue, Thuban is 3-issue. That's a theoretical 33% gain - say you get a little more than half of that with optimization.

And as anyone can see, games are GPU-bound, so an X4 435 can play the same games as an i5 2500. Check out Tom's GPU review. As for physics, AMD uses Bullet Physics, which is based on OpenCL, so the GPU can do some of the work. Even with L4D, which uses Havok CPU physics, the rag-doll effects are fine on my X4 940 and 4870.
So basically you should look at the technical specs rather than listening to rumors and innuendo.
April 12, 2011 12:27:47 PM

BaronMatrix said:
As for physics, AMD uses Bullet Physics, which is based on OpenCL, so the GPU can do some of the work. Even with L4D, which uses Havok CPU physics, the rag-doll effects are fine on my X4 940 and 4870.


Physics is still CPU-side, especially Havok. OpenCL ALLOWS for the CREATION of a GPU-based physics API, but none has been created yet, and I don't know of any major game that makes significant use of OpenCL for calculating physics effects on the GPU. [If it's not used, you get no benefit. Simple concept, really...]

Quote:
AMD is still better at FP than Intel, but benchmarks don't show that unless you look at SPECFP or SPECINT.


Be VERY careful, as you instantly bring memory access times into play when talking about those benchmarks. As such, I view them as useless; I prefer real-world usage, which does NOT favor AMD.

Also, I worry a lot about the Windows scheduler and BD. What concerns me is how work is distributed among BD's cores. Since we have one FP unit per module [2 cores], it's very possible, if the Windows scheduler views all cores as being equal [which it probably will], that one BD module could get overloaded with FP workloads. It's kind of the same issue as Hyperthreading [a hyperthreaded core getting work that requires the physical core's resources at some stage], although Windows Vista/7 is designed with Hyperthreading in mind, to help alleviate those issues...
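For what it's worth, an application that really cared could sidestep the scheduler and pin its FP-heavy threads itself. A rough Win32 sketch, assuming purely for illustration that logical processors 0/1 sit on one module and 2/3 on the next (the masks would have to match the real topology, which we don't know yet):

```cpp
// Sketch only: pin two FP-heavy worker threads onto different BD modules so
// they don't end up sharing one FP unit. Assumes, as a hypothetical layout,
// that logical CPUs 0/1 share the first module and 2/3 the second.
#include <windows.h>
#include <thread>

// Stand-in for a game's floating-point-bound worker (physics, audio mixing, ...).
static void fpHeavyWork()
{
    volatile double x = 1.0;
    for (int i = 0; i < 100000000; ++i)
        x = x * 1.0000001 + 0.5;   // keep the FP unit busy
}

int main()
{
    std::thread a([] {
        // Restrict this thread to logical processor 0 (first module, hypothetically).
        SetThreadAffinityMask(GetCurrentThread(), DWORD_PTR(1) << 0);
        fpHeavyWork();
    });
    std::thread b([] {
        // Restrict this thread to logical processor 2 (second module, hypothetically).
        SetThreadAffinityMask(GetCurrentThread(), DWORD_PTR(1) << 2);
        fpHeavyWork();
    });
    a.join();
    b.join();
    return 0;
}
```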
April 12, 2011 4:00:26 PM

You should not worry about the Windows scheduler. The delta between running 1 thread on a module vs. 2 is very small. Hyperthreading is a huge hit (negative in some cases). While you might believe that one thread per module will give you the best performance, this is not always the case.
April 12, 2011 6:50:03 PM

Quote:
The delta between running 1 thread on a module vs. 2 is very small.


My point was that, because there is only one FP unit per module, there is a VERY real chance of the Windows scheduler giving the two cores on a single BD module a very heavy FP workload, in which case that one FP unit shared by 2 cores becomes the system bottleneck. Essentially, you cut FP performance in half in that case, and that's significant. I'm very interested in how the cores are numbered within Windows for that very reason. [For example, on a Core 2 Quad, the two cores on a single die are 0/2 and 1/3, as opposed to 0/1 and 2/3.]
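Side note: Windows can at least tell you which logical processors it groups onto one physical core, so the numbering question is answerable once BD ships. A quick sketch using the Win32 topology API (whether BD modules will be reported as "cores" with two logical processors each is my assumption, not something AMD has confirmed):

```cpp
// Sketch: ask Windows which logical processors it groups onto each physical
// core; presumably a BD module would show up as one "core" with two logical
// CPUs, but that is an assumption until the chips actually ship.
#include <windows.h>
#include <cstdio>
#include <vector>

int main()
{
    DWORD len = 0;
    GetLogicalProcessorInformation(NULL, &len);   // first call just reports the needed size
    if (len == 0)
        return 1;
    std::vector<SYSTEM_LOGICAL_PROCESSOR_INFORMATION> info(
        len / sizeof(SYSTEM_LOGICAL_PROCESSOR_INFORMATION));
    if (!GetLogicalProcessorInformation(&info[0], &len))
        return 1;

    for (size_t i = 0; i < info.size(); ++i) {
        if (info[i].Relationship == RelationProcessorCore)
            std::printf("physical core: logical CPU mask = 0x%lx\n",
                        (unsigned long)info[i].ProcessorMask);
    }
    return 0;
}
```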

The FP unit is just one of my concerns: I'm very worried about IPC, which is bad enough on AMD chips as it is; I do not think a deeper pipeline is going to help any in this department, and I wouldn't be shocked if IPC actually gets slightly worse. And since most consumer software doesn't scale across threads well yet, I'm worried BD's full benefits will never be realized.

If BD were as good as rumored, AMD would have been way out in front by releasing some benchmarks that show its computing power, ESPECIALLY during the P67 recall, which was a prime opportunity to gain customers. AMD was silent, and people are back to buying P67 systems. That alone is cause for major concern; if BD were as good as promised, then AMD's marketing department should have been front and center during that fiasco. Never let an opportunity go to waste - but that's exactly what AMD did during the P67 recall.

If BD beats even the 1st-gen i7, I'll be surprised. Everything I see from AMD, from Fusion to Llano to BD, points to a focus on servers and other integrated devices [smartphones, tablets, etc.], areas where Intel is significantly weaker than the competition.
April 12, 2011 7:33:49 PM

gamerk316 said:
If BD were as good as rumored, AMD would have been way out in front by releasing some benchmarks that show its computing power, ESPECIALLY during the P67 recall, which was a prime opportunity to gain customers.


Yeah, yeah - then people like you would be harking back to Barcelona. It wouldn't matter what AMD did; you'd find a way to downplay it.

Quote:
AMD was silent, and people are back to buying P67 systems. That alone is cause for major concern; if BD were as good as promised, then AMD's marketing department should have been front and center during that fiasco. Never let an opportunity go to waste - but that's exactly what AMD did during the P67 recall.


Maybe you should spend more time actually paying attention?

http://www.amd.com/us/press-releases/Pages/ready-willin...
April 12, 2011 8:50:03 PM

gamerk316 said:
Quote:
The delta between running 1 thread on a module vs. 2 is very small.


My point was that, because there is only one FP unit per module, there is a VERY real chance of the Windows scheduler giving the two cores on a single BD module a very heavy FP workload, in which case that one FP unit shared by 2 cores becomes the system bottleneck.


The one FPU per module is able to perform either two 128-bit operations or one 256-bit operation per clock cycle. The only 256-bit operations right now are AVX operations; the rest are all 128 bits or less. AVX is a brand-spanking-new set of SIMD instructions and will thus be rarely seen in actual shipping software for quite some time. It appears that AMD made the decision to optimize for the most likely usage case, where a CPU will very rarely see AVX instructions in real (e.g. non-benchmark) programs, and thus the FPU operating in 2x128-bit mode will perform just as well as two full 256-bit-capable FPUs while taking up less die space. Also, you are assuming that both cores would be ceaselessly hammering on the FPU with no pipeline stalls or branch mispredictions or anything else that could possibly result in one core not being able to issue an FPU instruction every single clock cycle. That is very unlikely, else Intel's HyperThreading would give exactly zero benefit.
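For anyone who hasn't looked at AVX yet, the difference is just the SIMD width; the same loop can be written with 128-bit SSE or 256-bit AVX intrinsics. A minimal illustration of my own (not AMD's or Intel's code), assuming n is a multiple of 8:

```cpp
// Sketch: the same element-wise add done with 128-bit SSE and 256-bit AVX.
// On BD's shared FPU, the 128-bit form would presumably use one of the two
// 128-bit pipes, while a 256-bit AVX op needs both halves of the module's FPU.
#include <immintrin.h>

void add_sse(const float* a, const float* b, float* out, int n)
{
    for (int i = 0; i < n; i += 4) {            // 4 floats per 128-bit op
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
    }
}

void add_avx(const float* a, const float* b, float* out, int n)
{
    for (int i = 0; i < n; i += 8) {            // 8 floats per 256-bit op
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
    }
}
```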

Quote:
Essentially, you cut FP performance in half in that case, and that's significant. I'm very interested in how the cores are numbered within Windows for that very reason. [For example, on a Core 2 Quad, the two cores on a single die are 0/2 and 1/3, as opposed to 0/1 and 2/3.]


You very likely will NOT see FPU performance drop in many real applications, for the reasons I stated above.

I don't know how Windows numbers cores, as I have never used a Windows machine with more than one CPU die. Linux numbers CPUs sequentially, starting with all threads on the first die in the first CPU, then threads on the second die in the first CPU, and so forth. My old dual Gallatin Xeon file server had the first CPU's physical thread as CPU0 and its logical thread as CPU1, and the second CPU's physical thread was CPU2 and its logical thread was CPU3. The scheduler knew which threads were physical vs. logical and would thus schedule two threads on either CPU0+CPU2 or CPU1+CPU3, and never CPU0+CPU1 or CPU2+CPU3.
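If anyone wants to check that numbering on their own Linux box, the kernel exports it through sysfs; here is a quick sketch that just prints the standard topology files:

```cpp
// Sketch: print which logical CPUs Linux considers siblings of each core,
// using the standard sysfs topology files.
#include <cstdio>
#include <fstream>
#include <string>
#include <sstream>

int main()
{
    for (int cpu = 0; ; ++cpu) {
        std::ostringstream path;
        path << "/sys/devices/system/cpu/cpu" << cpu
             << "/topology/thread_siblings_list";
        std::ifstream f(path.str().c_str());
        if (!f)                       // stop at the first CPU that doesn't exist
            break;
        std::string siblings;
        std::getline(f, siblings);
        std::printf("cpu%d shares a core with: %s\n", cpu, siblings.c_str());
    }
    return 0;
}
```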

Quote:
The FP unit is just one of my concerns: I'm very worried about IPC, which is bad enough on AMD chips as it is; I do not think a deeper pipeline is going to help any in this department, and I wouldn't be shocked if IPC actually gets slightly worse. And since most consumer software doesn't scale across threads well yet, I'm worried BD's full benefits will never be realized.


If you are talking about multithreading with your "dynamically scale" comment, consumer software that uses much CPU power generally is at least somewhat multithreaded already and will only become more multithreaded in the future. Like it or not, single-threaded performance scaling is not very good and will not see much of an increase in the future unless there is a change to some CPU substrate and manufacturing process that is massively higher-clocked than silicon CMOS ICs. Multithreading is basically the only way to get a meaningful increase in performance with our current technologies. There are limits to the scaling of multithreaded programs, but we are still far from those limits, otherwise supercomputers and GPGPUs would not be around. And even extremely expensive machines don't have a much greater clock speed and single-threaded performance than current desktop CPUs. IBM's POWER7 is probably the best out there for that, but even those ~200 W must-be-water-cooled monstrosities are only 1 GHz or so faster than Core i7s and Phenom IIs, with possibly a little higher IPC.

I would say the biggest impediment to multithreading-related performance increases in consumer programs is the astonishingly crappy quality of consumer programs. It isn't always trivial to write good multithreaded programs, which is why lazy devs and poor-quality "if it manages to successfully compile, it ships" software vendors lament the "good old days" of not having to deal with thread management, when massive increases in clock speed covered up for atrocious code.
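As a trivial illustration of what "somewhat multithreaded" looks like in practice, here is a minimal sketch that splits one number-crunching loop across however many hardware threads the machine reports; the gain is bounded by the core count and by how much of the program can actually be split this way:

```cpp
// Sketch: split a simple summation across N worker threads.
// Illustrates the kind of coarse data-parallel split most consumer apps use.
#include <thread>
#include <vector>
#include <numeric>
#include <cstdio>

int main()
{
    const std::size_t n = 10000000;
    std::vector<double> data(n, 1.0);

    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 4;                 // fallback if the count is unknown

    std::vector<double> partial(workers, 0.0);
    std::vector<std::thread> pool;

    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&, w] {
            std::size_t begin = n * w / workers;   // each thread sums its own slice
            std::size_t end   = n * (w + 1) / workers;
            partial[w] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        });
    }
    for (auto& t : pool) t.join();

    double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::printf("total = %f\n", total);
    return 0;
}
```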

Quote:
If BD were as good as rumored, AMD would have been way out in front by releasing some benchmarks that show its computing power, ESPECIALLY during the P67 recall, which was a prime opportunity to gain customers. AMD was silent, and people are back to buying P67 systems. That alone is cause for major concern; if BD were as good as promised, then AMD's marketing department should have been front and center during that fiasco. Never let an opportunity go to waste - but that's exactly what AMD did during the P67 recall.


If Bulldozer really is as good as rumored, who would want to buy any of the existing Stars-based Athlon II/Phenom II parts when they could wait a few months and get something massively faster? Intel shot themselves in the foot massively by releasing Core 2 benchmarks and hyping up the new CPU about six months before the actual launch date. It did hurt AMD, but it also rendered the existing P4s and Pentium Ds basically unsalable and Intel took it on the chin for quite some time. AMD doesn't want to tank sales of their existing CPUs before they are actually ready to ship Bulldozer, so they're quiet about Bulldozer. They saw what happened to Intel and don't particularly want to have that happen to them.

Quote:
If BD beats even the 1st-gen i7, I'll be surprised. Everything I see from AMD, from Fusion to Llano to BD, points to a focus on servers and other integrated devices [smartphones, tablets, etc.], areas where Intel is significantly weaker than the competition.


I would expect Bulldozer to beat the first-generation Core i7 parts since the first-generation Core i7 parts are Bloomfield and Lynnfield and Thuban is more or less in line with how they perform. I doubt a six-core desktop Bulldozer will be slower than Thuban, and Bulldozer goes to 8 cores on the desktop.
April 12, 2011 10:25:07 PM

All I care about is how BD will perform for the price. I am itching to upgrade my nearly 4-year-old AMD system, and this time around I want to build something relatively fast, so I'm willing to spend about 200 bucks on a CPU (i5-2500, anyone?). I've always used AMD, but I have no allegiance to them anymore, as I want something that will nail distributed computing apps (FAH, World Community Grid) and Flight Simulator X. I don't need huge parallelism capabilities - 4 cores is fine - I just want something that can crunch numbers really fast, which is why I am cautiously optimistic about what BD can deliver for $20-40 less than a comparable SB.
April 12, 2011 10:48:53 PM

gamerk316 said:
If BD were as good as rumored, AMD would have been way out in front by releasing some benchmarks that show its computing power, ESPECIALLY during the P67 recall, which was a prime opportunity to gain customers. AMD was silent, and people are back to buying P67 systems. That alone is cause for major concern; if BD were as good as promised, then AMD's marketing department should have been front and center during that fiasco. Never let an opportunity go to waste - but that's exactly what AMD did during the P67 recall.


As someone who has been in the technology business for almost 20 years now, let me play out the scenario:

1. Intel launches SB in January.
2. AMD releases killer benchmarks in January to counter this.
3. All hell breaks loose in the distribution channel as demand slows and people sit on the sidelines.
4. Millions of dollars of inventory sits in the channel waiting to be returned.
5. OEMs ask AMD to write a big check.

I am not sure of what business you are in, but intentionally stalling sales is not going to help you get revenue. When people see real benchmarks they expect that the product is right around the corner and they sit out their purchasing decisions. That does not help our partners that made supply decisions about Q1 inventory last fall. You need to look at the big picture.
April 12, 2011 11:38:26 PM

jf-amd said:
As someone who has been in the technology business for almost 20 years now, let me play out the scenario:

1. Intel launches SB in January.
2. AMD releases killer benchmarks in January to counter this.
3. All hell breaks loose in the distribution channel as demand slows and people sit on the sidelines.
4. Millions of dollars of inventory sits in the channel waiting to be returned.
5. OEMs ask AMD to write a big check.

I am not sure of what business you are in, but intentionally stalling sales is not going to help you get revenue. When people see real benchmarks they expect that the product is right around the corner and they sit out their purchasing decisions. That does not help our partners that made supply decisions about Q1 inventory last fall. You need to look at the big picture.



Very Much Correct.
April 13, 2011 12:24:07 AM

MU_Engineer said:


If Bulldozer really is as good as rumored, who would want to buy any of the existing Stars-based Athlon II/Phenom II parts when they could wait a few months and get something massively faster? Intel shot themselves in the foot massively by releasing Core 2 benchmarks and hyping up the new CPU about six months before the actual launch date. It did hurt AMD, but it also rendered the existing P4s and Pentium Ds basically unsalable and Intel took it on the chin for quite some time. AMD doesn't want to tank sales of their existing CPUs before they are actually ready to ship Bulldozer, so they're quiet about Bulldozer. They saw what happened to Intel and don't particularly want to have that happen to them.

Yeah, Intel shot themselves in the foot so massively that they did the same thing with Nehalem and Sandy Bridge. :sarcastic:

I'm instead going to go with Cinebench becoming AMD's Super Pi when Bulldozer gets released.
April 13, 2011 12:13:47 PM

Quote:
The one FPU per module is able to perform either two 128-bit operations or one 256-bit operation per clock cycle. The only 256-bit operations right now are AVX operations; the rest are all 128 bits or less. AVX is a brand-spanking-new set of SIMD instructions and will thus be rarely seen in actual shipping software for quite some time. It appears that AMD made the decision to optimize for the most likely usage case, where a CPU will very rarely see AVX instructions in real (e.g. non-benchmark) programs, and thus the FPU operating in 2x128-bit mode will perform just as well as two full 256-bit-capable FPUs while taking up less die space. Also, you are assuming that both cores would be ceaselessly hammering on the FPU with no pipeline stalls or branch mispredictions or anything else that could possibly result in one core not being able to issue an FPU instruction every single clock cycle. That is very unlikely, else Intel's HyperThreading would give exactly zero benefit.


We shall see when it comes to FP performance; I'm not convinced.

More importantly, when judging performance, I always judge by the worst possible case, which here would be one of the FP units getting hammered. To say it's "unlikely" is not good enough in my view; it's either a potential problem, or it's not. After all, some people here would argue Hyperthreading DOES have almost no benefit...

Quote:
1. Intel launches SB in January.
2. AMD releases killer benchmarks in January to counter this.
3. All hell breaks loose in the distribution channel as demand slows and people sit on the sidelines.
4. Millions of dollars of inventory sits in the channel waiting to be returned.
5. OEMs ask AMD to write a big check.


6. BD releases mid-summer, meets expectations, and AMD and its partners see a massive increase in sales, gaining market share in the process.

Short-term thinking leads to short-term results. Anyone considering a new BD platform likely isn't rushing out to buy a brand-new Phenom II based system right now anyway. A lot of people here sat through the P67 recall and promptly bought a P67 platform, because AMD has nothing to offer in that performance range. Those are lost sales, something AMD needs right about now.
April 13, 2011 12:44:21 PM

And people also bought X6s and graphics cards for the same price as an i7 2600K.

AMD will sell all their Bulldozers; there's no doubt about that. Taking advantage of Intel's mess meant selling their ageing CPUs at higher prices than they might have otherwise.

Now they are sitting on less last-gen inventory and are ready to hit the ground running with two new chips in a couple of months' time. Don't try to spin Intel's %&$@-up as an AMD loss; it was a clear win.
April 13, 2011 1:15:05 PM

^LOL, no, Bob's sig should be "Although I don't work for AMD, I love to flog them for free!" :p

I just wonder why AMD didn't follow JF's advice before Barcie launched late and underperforming.

At any rate, I do wonder why there have been no OEM leaks to date (credible ones, that is)...
April 13, 2011 1:18:29 PM

Yeah, just remember when Bulldozer spanks that 2500 that it was you who couldn't wait another 2 months. Don't blame your impatience on AMD; you had plenty of warning of what BD was going to be capable of and didn't listen.
April 13, 2011 1:35:29 PM

It's not in AMD's best interest to release benchmarks; all that would do is force Intel's hand faster. It has already been suggested that Intel pulling in Ivy Bridge is a response to Bulldozer's speed.

http://www.kitguru.net/components/cpu/jules/bulldozer-s...

http://www.xbitlabs.com/news/cpu/display/20110114134306...

Nvidia has opened up SLI to AM3+ mobos as well. The warning signs have been there for a long time, but don't expect AMD to come out with official benchmarks until launch.
April 14, 2011 5:46:51 AM

jf-amd said:
You should not worry about the Windows scheduler. The delta between running 1 thread on a module vs. 2 is very small. Hyperthreading is a huge hit (negative in some cases). While you might believe that one thread per module will give you the best performance, this is not always the case.



Don't ya just hate it...I DO appreciate you though, regardless...
April 14, 2011 8:27:17 AM

jf-amd said:
As someone who has been in the technology business for almost 20 years now, let me play out the scenario:

1. Intel launches SB in January.
2. AMD releases killer benchmarks in January to counter this.
3. All hell breaks loose in the distribution channel as demand slows and people sit on the sidelines.
4. Millions of dollars of inventory sits in the channel waiting to be returned.
5. OEMs ask AMD to write a big check.

I am not sure of what business you are in, but intentionally stalling sales is not going to help you get revenue. When people see real benchmarks they expect that the product is right around the corner and they sit out their purchasing decisions. That does not help our partners that made supply decisions about Q1 inventory last fall. You need to look at the big picture.


What you could do... (Might work, mebbe)

1. Intel launches SB in January.
2. AMD leaks crappy benchmarks to various non-reputable sites to counter this. (Normal people either buy Sandy Bridge or K10.5 chips, whilst weird AMD loyalists like me refute the benchmarks and keep waiting.)
3. AMD releases AM3+ compatible chipsets ASAP to all their partners.
4. AMD lowers the RRP of current chips to compete with Intel's SB.
5. People buy either Sandy Bridge or current chips (the budget end would benefit most).
6. Bulldozer comes out, amazes everyone, and most people can upgrade to it with just a mobo change.
7. People stuck with AMD's budget-end chips can now magically upgrade to the amazing new Bulldozer and are essentially tied to AMD for a future upgrade (budget end again).

Mind you this only works in theory.
April 14, 2011 10:01:55 AM

I should always add clauses. ;) 
April 14, 2011 7:35:18 PM

bobdozer said:
AMD will sell all their Bulldozers; there's no doubt about that.

Are their yield problems that bad?

Quote:
Taking advantage of Intel's mess meant selling their ageing CPUs at higher prices than they might have otherwise.

They cut their prices at that time.


April 14, 2011 8:54:52 PM

Well, the Orochi/Bulldozer architecture has the Radeon AIW weapon as a logo. There is also an Axe Cool Metal Fine Quartz & Zinc hair/body wash that looks the same color! Kinda looks like they want to split the sector!

It's all about competing in the cloud computing field! Intel has its Xeons aimed and armed; AMD might have the GPU advantage if they can get more shaders on the core, which is convenient - add an embedded VPU on the mainboard and you have two back-up solutions in case the PCI-E card fails!

If ATI is getting a Radeon 7000 series out around Mayday payday, it gives them time to get something else embedded on the GPU die! GPUs pack some powerful punches nowadays!

It also gives them a step up to release a refresh or a new architecture by Christmas to counter Kepler, and maybe drop a bomb with Windows 8 Midori! Anyway, Intel has Larrabee waiting in the background - that's probably silicon-graphite at 155 MHz, probably the same color - and I'm also wondering if they are going to release a hair/body wash with silicon graphite! Sell some of that stuff to the Killzone 3 team, considering they're marketing with Edge Gel at Walgreens! They need another color besides black and red night-vision eyes! Throw it into the mix and they might have orange glowing eyes and look like Happy Halloween!

Best move is to get the logo on Ice Breaking Spiking Glue by Joico Co.!
April 14, 2011 8:56:50 PM

Where's that mad gnome Dopey waiting to chop the thread in half? The moderator one!