The chance to break into Dell's supplier chain has passed.

Anonymous
March 3, 2005 5:09:25 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips

Fellow AMD admirers ;-),

Googling to see what anybody had to say about intel and cis turned up
this bit on AMD

http://money.cnn.com/2005/02/28/technology/techinvestor...

"AMD caught Intel pretty good with Opteron," says David Wu, an analyst
with Global Crown Partners. "If AMD can't beat Intel with Opteron, I
don't know if they ever will."

I'm going to get beaten up for it, but I don't think Opteron changed
the lowdown on AMD: very smart company, tries hard, never comes up
with anything really new.

Make Intel's life miserable with 64-bit x86? Score. Big win for end
users.

Break Intel's effective monopoly? Not that way. Okay, maybe not any
way, certainly not any way I can think of.

RM
Anonymous
March 3, 2005 5:09:26 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips

Robert Myers wrote:
> Fellow AMD admirers ;-),
>
> Googling to see what anybody had to say about intel and cis turned up
> this bit on AMD
>
> http://money.cnn.com/2005/02/28/technology/techinvestor...
>
> "AMD caught Intel pretty good with Opteron," says David Wu, an
analyst
> with Global Crown Partners. "If AMD can't beat Intel with Opteron, I
> don't know if they ever will."

You gotta really differentiate where your quote of the article ends and
where your own opinion starts. I was thinking the below quote was from
the article.

> I'm going to get beaten up for it, but I don't think Opteron changed
> the lowdown on AMD: very smart company, tries hard, never comes up
> with anything really new.

On purpose, it wants to create practical stuff that the market will
accept. Unlike hopeless science projects like Itanium.

> Make Intel's life miserable with 64-bit x86? Score. Big win for end
> users.

Well, it has managed to marginalize Itanium effectively. No way Itanium
will ever make it out of its niches now.


> Break Intel's effective monopoly? Not that way. Okay, maybe not any
> way, certainly not any way I can think of.

Well, it was never going to break into Dell no matter what. However,
AMD does need to spend some money on marketing itself. There's simply
no other way around it. Intel will always be able to sell more than AMD
with inferior products, simply on the power of marketing.

Yousuf Khan
Anonymous
March 4, 2005 10:57:54 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips

Robert Myers wrote:
> >You gotta really differentiate where your quote of the article ends
> >and where your own opinion starts. I was thinking the below quote was
> >from the article.
> >
> As you know, I usually use html-like notation <quote>,</quote> to set
> off extended quotes. In this particular case, it was a short quote,
> and the quote itself was a quote. I won't do it again.

This is where Firefox and Thunderbird really make this stuff easier for
you. There are various extensions available for them that automate
this process.

> >> I'm going to get beaten up for it, but I don't think Opteron changed
> >> the lowdown on AMD: very smart company, tries hard, never comes up
> >> with anything really new.
> >
> >On purpose, it wants to create practical stuff that the market will
> >accept. Unlike hopeless science projects like Itanium.
> >
> Oh, hmmm. Was Itanium a science project? Intel certainly wanted to
> make a big score, and I applaud them for thinking they were doing the
> right science, no matter how inaccurate their prognostication turned
> out to be. The issue they thought they could see, the compiler
> problem, turned out to be harder than they thought. The biggest
> mistake I fault them on is that they seem to have lost control of the
> complexity of the architecture: way too many features, all of which
> had to be supported in hardware and, even more important, in exception
> and recovery code.
>
> As to practical stuff vs. science projects, that's why I admire intel.
> I admire their stubbornness. I'm an IBM admirer, too. To the extent
> that IBM has gotten more "practical," they've lost my respect, even if
> I understand that they've had very little choice.
>
> The industry, Yousuf, is going to choke on its own vomit. More, more,
> more x86? Same old bugs. Same old windoze. Same old creaky
> infrastructure. It takes an Intel or an IBM to break molds. AMD
> never.

x86's problems weren't really software, but hardware. Itanium did
nothing to make the hardware any better. Itanium was continuing on with
the same old shared-bus architecture that Intel processors have always
had, despite the fact that they were starting with a brand new software
architecture.

Yousuf Khan
Anonymous
March 4, 2005 12:00:29 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips

On 3 Mar 2005 13:15:55 -0800, "YKhan" <yjkhan@gmail.com> wrote:

>Robert Myers wrote:
>> Fellow AMD admirers ;-),
>>
>> Googling to see what anybody had to say about intel and cis turned up
>> this bit on AMD
>>
>> http://money.cnn.com/2005/02/28/technology/techinvestor...
>>
>> "AMD caught Intel pretty good with Opteron," says David Wu, an
>analyst
>> with Global Crown Partners. "If AMD can't beat Intel with Opteron, I
>> don't know if they ever will."
>
>You gotta really differentiate where your quote of the article ends and
>where your own opinion starts. I was thinking the below quote was from
>the article.
>
As you know, I usually use html-like notation <quote>,</quote> to set
off extended quotes. In this particular case, it was a short quote,
and the quote itself was a quote. I won't do it again.

>> I'm going to get beaten up for it, but I don't think Opteron changed
>> the lowdown on AMD: very smart company, tries hard, never comes up
>> with anything really new.
>
>On purpose, it wants to create practical stuff that the market will
>accept. Unlike hopeless science projects like Itanium.
>
Oh, hmmm. Was Itanium a science project? Intel certainly wanted to
make a big score, and I applaud them for thinking they were doing the
right science, no matter how inaccurate their prognostication turned
out to be. The issue they thought they could see, the compiler
problem, turned out to be harder than they thought. The biggest
mistake I fault them on is that they seem to have lost control of the
complexity of the architecture: way too many features, all of which
had to be supported in hardware and, even more important, in exception
and recovery code.

As to practical stuff vs. science projects, that's why I admire intel.
I admire their stubbornness. I'm an IBM admirer, too. To the extent
that IBM has gotten more "practical," they've lost my respect, even if
I understand that they've had very little choice.

The industry, Yousuf, is going to choke on its own vomit. More, more,
more x86? Same old bugs. Same old windoze. Same old creaky
infrastructure. It takes an Intel or an IBM to break molds. AMD
never.

>> Make Intel's life miserable with 64-bit x86? Score. Big win for end
>> users.
>
>Well, it has managed to marginalize Itanium effectively. No way Itanium
>will ever make it out of its niches now.
>
Oh, who knows really. I have a hard time visualizing how Itanium will
survive in a niche, to be honest. If it does, it will eventually
break out of the niche. You think if the big boyz are using Power and
Itanium, your local bit-jockey won't want to be able to say he's doing
the same, if the price is right?

>
>> Break Intel's effective monopoly? Not that way. Okay, maybe not any
>> way, certainly not any way I can think of.
>
>Well, it was never going to break into Dell no matter what. However,
>AMD does need to spend some money on marketing itself. There's simply
>no other way around it. Intel will always be able to sell more than AMD
>with inferior products, simply on the power of marketing.
>
That whole deal is going to fall apart when one of the operatives
carrying messages written on flash paper back and forth between Santa
Clara and Round Rock is intercepted by AMD agents.

RM
Anonymous
March 4, 2005 2:35:55 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips

YKhan wrote:
snip
>
>
> X86's problems weren't really software, but hardware. Itanium did
> nothing to make hardware any better. Itanium was continuing on with the
> same old shared bus architecture that Intels have always had, despite
> the fact that they were starting with a brand new software
> architecture.
>
> Yousuf Khan
>

I would disagree. Getting rid of the shared FSB is not a problem, not
that big of a deal. Although getting the board manufacturers to stop
using junk board material and learn how to control impedance is a
different story. And high-speed link boards need controlled impedance.

In my opinion the real problem with Itanium is that its objectives had
nothing to do with the customers/users objectives. They (customers) had
no reason to embrace Itanium.

Put on your customer hat, of whatever persuasion. Try to think of a
real reason any end user would be desirous of using Itanium, as actually
delivered at the time it was delivered.

del cecchi
Anonymous
March 4, 2005 5:17:46 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips

On Fri, 04 Mar 2005 11:35:55 -0600, Del Cecchi
<cecchinospam@us.ibm.com> wrote:

<snip>

>
>In my opinion the real problem with Itanium is that its objectives had
>nothing to do with the customers/users objectives. They (customers) had
>no reason to embrace Itanium.
>
Intel pursued the VLIW-like architecture for the same reason IBM
worked on Daisy: the superchip to subsume all other chips. With
virtualization and whatever RAS needed to make it acceptable to IBM
and its mainframe-type customers, Itanium was to replace _everything_,
I think.

Opteron really has put Itanium into a no-man's-land: squeezed between
a very capable x86 and an actual mainframe manufacturer (ibm) that's
apparently not interested in abandoning its own architecture.

Had it worked, itanium would have satisfied customers' needs nicely: a
chip that would execute non-native binaries (including 360 and x86),
mainframe features, and a variety of vendors to choose from
("industry-standard architecture," in intel's code phrase).

>Put on your customer hat, of whatever persuasion. Try to think of a
>real reason any end user would be desirous of using Itanium, as actually
>delivered at the time it was delivered.
>
Oh, well, now that "as actually delivered" is a problem! x86
emulation never worked the way it was supposed to.

RM
Anonymous
March 5, 2005 10:37:36 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips

On Sat, 05 Mar 2005 17:54:56 -0500, George Macdonald
<fammacd=!SPAM^nothanks@tellurian.com> wrote:

>On Sat, 05 Mar 2005 07:17:17 -0500, Robert Myers <rmyers1400@comcast.net>
>wrote:
>
>>On Sat, 05 Mar 2005 05:34:54 -0500, George Macdonald
>><fammacd=!SPAM^nothanks@tellurian.com> wrote:
>>

<snip>

>
>>Some part of me wondered whether AMD could break into Dell. All other
>>considerations aside, it's not like Dell to add unnecessary
>>complication to its life. I'm sure they looked at how many sales they
>>might lose vs. the engineering costs and decided it wasn't worth it.
>>However that calculation came out, I'm sure they used it to squeeze
>>Intel a little harder.
>
>Engineering?... Dell? Hell, they don't even have a Serverworks chipset to
>diddle with any longer - it's just Intel generic boxen top to bottom. I
>think we all knew -- it was discussed here at length -- that Dell was just
>using AMD as a manouvering device to "squeeze" Intel.
>
I've lost track of Intel server chipsets. Is _anybody_ but Intel
making Server chipsets for Intel processors?

>>The fact that Dell holds the line makes life much tougher for AMD, and
>>if Opteron with Intel scrambling in the dust didn't do it, I don't
>>know what would.
>
>IMO Dell is going to get slaughtered in the server space anyway... unless
>we have the unlikely situation that IBM decides to OEM Hurricane. Like
>every other business, Dell will go through a bad spell and its precarious
>business model could mean it will not weather the storm. Lack of depth
>eventually tells... and then we'll get a new pretender.:-)
>
You think you can see that far into the future? Nothing would please
me more than to see Dell out of the dominant position. But they have
got it all worked out so smoothly.

The rules are all about to change with multicore chips. With
bandwidth requirements going through the roof, I think the day of the
motherboard is about to be at hand. How are they going to route all
that stuff, anyway? That sounds like a bad scene for Dell, except
that motherboards of requisite quality will be commodities.

AMD will make somebody else successful? Who? Just like the auto
business, the computer business is a business of vanishing margins,
and Dell is tops at that game.

How is Dell going to get slaughtered?

>>>Hey I thought we were supposed to get an
>>>official name for "Desktrino" this week. Did I miss it in all the
>>>excitement?:-)
>>
>>I'm more interested in where Intel is headed with interconnect.
>>Mellanox is now selling 10Gb/s infiniband adapters for $69 in
>>quantity:
>>
>>http://www.mellanox.com/news/press/pr_030105.html
>
>That works through a PCI Express interconnect. Pathscale has a direct
>connect to Hypertransport 4x infiniband adapter
>http://www.pathscale.com/infinipath.html - dunno what "commodity priced"
>means... nor what the size of that market might turn out to be.

That's good to know about. That's a space in which AMD has a fighting
chance.

RM
Anonymous
March 5, 2005 10:52:46 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips

Robert Myers wrote:
> Had it worked, itanium would have satisfied customers' needs nicely: a
> chip that would execute non-native binaries (including 360 and x86),
> mainframe features, and a variety of vendors to choose from
> ("industry-standard architecture," in intel's code phrase).

If Intel had done that, i.e. come up with an architecture that could
emulate many other architectures, then it would've guaranteed Itanium
100% success. A chip that could emulate both x86 and PA-RISC at full
speed, at the very least; possibly something that could translate
anything. But instead it came up with this braindead VLIW/EPIC concept
which was an answer to nobody's needs.

That would've meant a RISC-like architecture, as RISC translates to
RISC very well, and as well as CISC.

Part of the reason I'm not so confident about IBM's Cell processor
either, is because of this same reason, it's not really answering
anybody's needs. It's not an architecture that can take over from
anybody else's architecture, except for PowerPC itself which is its
native architecture.

The Transmeta concept held a lot of excitement for me at one time, not
because of its power savings but its code-morphing. But its internal
VLIW was really only meant for translating x86 and nothing else. They
might as well have not bothered with VLIW as the underlying
architecture.

> >Put on your customer hat, of whatever persuasion. Try to think of a
> >real reason any end user would be desirous of using Itanium, as
> >actually delivered at the time it was delivered.
> >
> Oh, well, now that "as actually delivered" is a problem! x86
> emulation never worked the way it was supposed to.

I think if somebody can come up with a code-morpher that can translate
anything with a small firmware upgrade at only a smallish 20% loss of
performance, they will finally have themselves a winner: something that can
replace anything. Buy the one processor and you get something that can
run PowerPC, Sparc, MIPS, and x86 on the same system.

Yousuf Khan
Anonymous
March 6, 2005 6:10:31 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips

On Sat, 05 Mar 2005 19:37:36 -0500, Robert Myers <rmyers1400@comcast.net>
wrote:

>On Sat, 05 Mar 2005 17:54:56 -0500, George Macdonald
><fammacd=!SPAM^nothanks@tellurian.com> wrote:
>
>>On Sat, 05 Mar 2005 07:17:17 -0500, Robert Myers <rmyers1400@comcast.net>
>>wrote:
>>
>>>On Sat, 05 Mar 2005 05:34:54 -0500, George Macdonald
>>><fammacd=!SPAM^nothanks@tellurian.com> wrote:
>>>
>
><snip>
>
>>
>>>Some part of me wondered whether AMD could break into Dell. All other
>>>considerations aside, it's not like Dell to add unnecessary
>>>complication to its life. I'm sure they looked at how many sales they
>>>might lose vs. the engineering costs and decided it wasn't worth it.
>>>However that calculation came out, I'm sure they used it to squeeze
>>>Intel a little harder.
>>
>>Engineering?... Dell? Hell, they don't even have a Serverworks chipset to
>>diddle with any longer - it's just Intel generic boxen top to bottom. I
>>think we all knew -- it was discussed here at length -- that Dell was just
>>using AMD as a manouvering device to "squeeze" Intel.
>>
>I've lost track of Intel server chipsets. Is _anybody_ but Intel
>making Server chipsets for Intel processors?

Well as mentioned, there's IBM's Hurricane - IBM *does* like to add some of
its own "value" and it does sound err, nice. SiS just got a license for
1066MHz FSB but I'm not sure whether they intend to go into server stuff.
AYK, traditionally, Intel server chipsets have been so-so.

>>>The fact that Dell holds the line makes life much tougher for AMD, and
>>>if Opteron with Intel scrambling in the dust didn't do it, I don't
>>>know what would.
>>
>>IMO Dell is going to get slaughtered in the server space anyway... unless
>>we have the unlikely situation that IBM decides to OEM Hurricane. Like
>>every other business, Dell will go through a bad spell and its precarious
>>business model could mean it will not weather the storm. Lack of depth
>>eventually tells... and then we'll get a new pretender.:-)
>>
>You think you can see that far into the future? Nothing would please
>me more than to see Dell out of the dominant position. But they have
>got it all worked out so smoothly.

Just prognosticating.:-) Hell I'm at least as good as your average
anal...yst and I'm quite sure the Dell model is fragile - every business
that has traded on paper-thin margins has gone down with a crash; ever hear
of Crazy Eddie? I just hope they don't take too many others down along the
way.

>The rules are all about to change with multicore chips. With
>bandwidth requirements going through the roof, I think the day of the
>motherboard is about to be at hand. How are they going to route all
>that stuff, anyway? That sounds like a bad scene for Dell, except
>that motherboards of requisite quality will be commodities.

Again, AMD is much better positioned from the POV of scalability here: add
a Hypertransport link as necessary - it's already in the CPUs and the
chipset/mbrd companies are all clued up on implementing... easy stuff.
Current desktop mbrds have more than enough bandwidth -- you need to take a
look at the grass on the other side -- so adding a little won't be a big
deal. nForce3/4 are single chips!

>AMD will make somebody else successful? Who? Just like the auto
>business, the computer business is a business of vanishing margins,
>and Dell is tops at that game.

Yep there's some truth in that auto comparison and, like I've said here
before, the PC/Server business is, like the auto business, now pretty much
a cyclical replacement market - you just hope that everybody doesn't
synchronize on their cycles.:-) Right about now, I'd think it's a fair bet
that Dell is taking a very close look at Lenovo's expansion strategy
options and monitoring their actual moves. Hell who knows?.... with
Carleton gone, HP may even get its hat on straight... and Sun has two
options, one of which is die.

>How is Dell going to get slaughtered?

Technology-wise, two directions in server-space that I see off-hand: IBM
will have a better widget with its Xeon MP chipset and Sun will have a
better mid to upper-scale server with Opteron. HP is sounding enthusiastic
about Opteron too, though they're obviously not going to throw the (Intel)
baby out with the bath water when it comes down to it.

--
Rgds, George Macdonald
Anonymous
March 6, 2005 10:01:55 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips

On Sun, 06 Mar 2005 03:10:31 -0500, George Macdonald
<fammacd=!SPAM^nothanks@tellurian.com> wrote:

>On Sat, 05 Mar 2005 19:37:36 -0500, Robert Myers <rmyers1400@comcast.net>
>wrote:
>
>>On Sat, 05 Mar 2005 17:54:56 -0500, George Macdonald
>><fammacd=!SPAM^nothanks@tellurian.com> wrote:
>>
>>>On Sat, 05 Mar 2005 07:17:17 -0500, Robert Myers <rmyers1400@comcast.net>
>>>wrote:
>>>

<snip>

>>>
>>I've lost track of Intel server chipsets. Is _anybody_ but Intel
>>making Server chipsets for Intel processors?
>
>Well as mentioned, there's IBM's Hurricane - IBM *does* like to add some of
>its own "value" and it does sound err, nice. SiS just got a license for
>1066MHz FSB but I'm not sure whether they intend to go into server stuff.
>AYK, traditionally, Intel server chipsets have been so-so.
>
Intel does only as well as it has to, I'm sure. I'm sure that's what
infuriates many techies, but a business type looking at how Intel
plays its cards would probably admire it. They'll do just as well as
they have to to stay at the table... that's the Intel guarantee.

>>>>The fact that Dell holds the line makes life much tougher for AMD, and
>>>>if Opteron with Intel scrambling in the dust didn't do it, I don't
>>>>know what would.
>>>
>>>IMO Dell is going to get slaughtered in the server space anyway... unless
>>>we have the unlikely situation that IBM decides to OEM Hurricane. Like
>>>every other business, Dell will go through a bad spell and its precarious
>>>business model could mean it will not weather the storm. Lack of depth
>>>eventually tells... and then we'll get a new pretender.:-)
>>>
>>You think you can see that far into the future? Nothing would please
>>me more than to see Dell out of the dominant position. But they have
>>got it all worked out so smoothly.
>
>Just prognosticating.:-) Hell I'm at least as good as your average
>anal...yst and I'm quite sure the Dell model is fragile - every business
>that has traded on paper-thin margins has gone down with a crash; ever hear
>of Crazy Eddie? I just hope they don't take too many others down along the
>way.
>
I'd guess there are too many people watching Dell in a way that Crazy
Eddie never was, and even Enron never was.

You may be right about Lenovo, but that deal is surely structured so
that Lenovo can't touch the server space.

>>The rules are all about to change with multicore chips. With
>>bandwidth requirements going through the roof, I think the day of the
>>motherboard is about to be at hand. How are they going to route all
>>that stuff, anyway? That sounds like a bad scene for Dell, except
>>that motherboards of requisite quality will be commodities.
>
>Again, AMD is much better positioned from the POV of scalability here: add
>a Hypertransport link as necessary - it's already in the CPUs and the
>chipset/mbrd companies are all clued up on implementing... easy stuff.
>Current desktop mbrds have more than enough bandwidth -- you need to take a
>look at the grass on the other side -- so adding a little won't be a big
>deal. nForce3/4 are single chips!
>
I'm skeptical that it actually works that way above four processors.
Take a look at tpmC sorted by raw performance

http://www.tpc.org/tpcc/results/tpcc_results.asp?print=...

I think the first Opteron entry is a RackSaver QuatreX-64 Server 4P,
with a score of 82,226, with Power and Itanium up in the millions.
It's true, the $/tpmC is very attractive at $2.72, but the claim you
are making is about scalability. I think AMD has designed a sizzling
chip for the 4P space.

>>AMD will make somebody else successful? Who? Just like the auto
>>business, the computer business is a business of vanishing margins,
>>and Dell is tops at that game.
>
>Yep there's some truth in that auto comparison and, like I've said here
>before, the PC/Server business is, like the auto business, now pretty much
>a cyclical replacement market - you just hope that everybody doesn't
>synchronize on their cycles.:-) Right about now, I'd think it's a fair bet
>that Dell is taking a very close look at Lenovo's expansion strategy
>options and monitoring their actual moves. Hell who knows?.... with
>Carleton gone, HP may even get its hat on straight... and Sun has two
>options, one of which is die.
>
HP or Sun is going to save itself by becoming the king of low-priced
4P Opteron servers, the space that IBM and Dell have left open for
them? Just writing the sentence down would make me want to sell the
stock of either. I'm sure Lenovo is a cause for concern on Dell's
part.

>>How is Dell going to get slaughtered?
>
>Technology-wise, two directions in server-space that I see off-hand: IBM
>will have a better widget with its Xeon MP chipset and Sun will have a
>better mid to upper-scale server with Opteron. HP is sounding enthusiastic
>about Opteron too, though they're obviously not going to throw the (Intel)
>baby out with the bath water when it comes down to it.

HP's future is itanium. Sun doesn't have a future. If something
kills Dell, it won't be Dell's failure to adopt AMD that does it.

RM
Anonymous
March 6, 2005 10:40:53 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips

On 5 Mar 2005 19:52:46 -0800, "YKhan" <yjkhan@gmail.com> wrote:

>Robert Myers wrote:
>> Had it worked, itanium would have satisfied customers' needs nicely: a
>> chip that would execute non-native binaries (including 360 and x86),
>> mainframe features, and a variety of vendors to choose from
>> ("industry-standard architecture," in intel's code phrase).
>
>If Intel had done that, i.e. come up with an architecture that could
>emulate many other architectures, then it would've guaranteed Itanium
>100% success. A chip that could emulate both x86 and PA-RISC at full
>speed, at the very least; possibly something that could translate
>anything. But instead it came up with this braindead VLIW/EPIC concept
>which was an answer to nobody's needs.
>
Intel thought it was taking the best ideas available at the time it
started the project. IBM had a huge investment in VLIW, and Elbrus
was making wild claims about what it could do.

Somebody who doesn't actually do computer architecture probably has a
very poor idea of all the constraints that operate in that universe,
but I'll stick with my notion that Intel/HP's mistake was that they
had a clean sheet of paper and let too much coffee get spilled on it
from too many different people.

>That would've meant a RISC-like architecture, as RISC translates to
>RISC very well, and as well as CISC.
>
>Part of the reason I'm not so confident about IBM's Cell processor
>either, is because of this same reason, it's not really answering
>anybody's needs. It's not an architecture that can take over from
>anybody else's architecture, except for PowerPC itself which is its
>native architecture.
>
Nobody needs a home computer, and worldwide demand for computers will
be five units. The advantages of streaming processors is low power
consumption and high throughput.

>The Transmeta concept held a lot of excitement for me at one time, not
>because of its power savings but its code-morphing. But its internal
>VLIW was really only meant for translating x86 and nothing else. They
>might as well have not bothered with VLIW as the underlying
>architecture.
>
The belief was (I think) that the front end part was sufficiently
repetitive that it could be massaged heavily to deliver a very clean
instruction stream to the back end. The concept isn't completely
wrong, just not sufficiently right. The DynamoRio people just
announced a new release, but I haven't had a chance to try it. That's
an optimizing front-end driving CISC. That project was motivated by
Itanium, I think.

>> >Put on your customer hat, of whatever persuasion. Try to think of a
>> >real reason any end user would be desirous of using Itanium, as
>> >actually delivered at the time it was delivered.
>> >
>> Oh, well, now that "as actually delivered" is a problem! x86
>> emulation never worked the way it was supposed to.
>
>I think if somebody can come up with a code-morpher that can translate
>anything with a small firmware upgrade at only a smallish 20% loss of
>performance, they will finally have themselves a winner: something that can
>replace anything. Buy the one processor and you get something that can
>run PowerPC, Sparc, MIPS, and x86 on the same system.
>
That's what IBM (and Intel and probably Transmeta, although they never
admitted it) probably wanted to do. For free, you should get runtime
feedback-directed optimization to make up for the overhead of
morphing. That's the theory, anyway. Exception and recovery may not
be the biggest problem, but it's one big problem I know about.
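
A very rough C sketch of that feedback-directed idea. Everything in it
-- the threshold, the block bookkeeping, the whole flow -- is
hypothetical, not any real morpher: blocks run cheaply at first, and
only the ones that prove hot get the expensive retranslation, so the
optimization cost is amortized over repeated executions.

/* Toy sketch of "translate first, optimize what turns out to be hot".
 * All names and numbers here are invented for illustration only. */
#include <stdio.h>

#define HOT_THRESHOLD 50   /* executions before a block is worth reoptimizing */

typedef struct {
    int exec_count;        /* runtime feedback: how often the block has run */
    int optimized;         /* has it been retranslated with optimization?   */
} block_profile;

/* Called on every entry to a guest basic block. */
static void run_block(block_profile *b, int id)
{
    b->exec_count++;
    if (!b->optimized && b->exec_count >= HOT_THRESHOLD) {
        /* A real morpher would retranslate the block here: scheduling,
         * inlining, speculation, all guided by the observed profile. */
        b->optimized = 1;
        printf("block %d became hot after %d runs; reoptimizing\n",
               id, b->exec_count);
    }
}

int main(void)
{
    block_profile blocks[2] = { {0, 0}, {0, 0} };
    int i;
    for (i = 0; i < 100; i++)
        run_block(&blocks[0], 0);   /* hot block: pays for its optimization */
    run_block(&blocks[1], 1);       /* cold block: never worth the effort   */
    return 0;
}

The only point of the sketch is the division of labor: cheap profiling
on every execution, expensive optimization only where the profile says
it will pay.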

RM
Anonymous
March 6, 2005 4:06:21 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips

Robert Myers wrote:
> Somebody who doesn't actually do computer architecture probably has a
> very poor idea of all the constraints that operate in that universe,
> but I'll stick with my notion that Intel/HP's mistake was that they
> had a clean sheet of paper and let too much coffee get spilled on it
> from too many different people.

I mean they achieved none of their original goals. Neither did Itanium
run x86 at close to full-speed. Nor did it simplify core design enough
to make the core very small, cheap to make, and/or fast to run. It
required massive amounts of cache to run, making it expensive. It was
complicated, making it hard to transition to the next miniaturization
process node. The x86 emulator was useless despite being put right into
silicon.

> Nobody needs a home computer, and worldwide demand for computers will
> be five units. The advantages of streaming processors is low power
> consumption and high throughput.

Five units of what?

They do need home electronics though. The sooner they can bring PC
technology into the realm of home electronics the better. I'm surprised
they can't get the cost of these things down any further. They were
making huge strides in reducing prices until now.

> That's what IBM (and Intel and probably Transmeta, although they never
> admitted it) probably wanted to do. For free, you should get runtime
> feedback-directed optimization to make up for the overhead of
> morphing. That's the theory, anyway. Exception and recovery may not
> be the biggest problem, but it's one big problem I know about.

What they really need is a kind of YACC (Yet Another Compiler Compiler)
for instruction sets. A most atomic of instruction sets that has as much
in common with other instruction sets as possible. Something that can
simply be table-based and do a simple lookup between emulated
instruction sets and its own native instruction set.
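
To make the table-driven idea concrete, here's a very rough C sketch.
Both "instruction sets" and every opcode mapping in it are invented
purely for illustration, not taken from any real ISA:

/* Toy sketch of a table-driven instruction translator: a lookup table
 * maps opcodes of an invented "guest" ISA onto an invented "native" ISA. */
#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint8_t     guest_op;   /* opcode in the emulated instruction set */
    uint8_t     native_op;  /* opcode it maps to in the native set    */
    const char *mnemonic;
} xlat_entry;

enum { G_ADD = 0x01, G_SUB = 0x02, G_LOAD = 0x10 };  /* toy guest ops  */
enum { N_ADD = 0xA0, N_SUB = 0xA1, N_LOAD = 0xB0 };  /* toy native ops */

static const xlat_entry xlat_table[] = {
    { G_ADD,  N_ADD,  "add"  },
    { G_SUB,  N_SUB,  "sub"  },
    { G_LOAD, N_LOAD, "load" },
};

/* Look up one guest opcode; returns the native opcode, or -1 if the
 * table has no entry (a real design would fall back to an interpreter). */
static int translate(uint8_t guest_op)
{
    size_t i;
    for (i = 0; i < sizeof xlat_table / sizeof xlat_table[0]; i++)
        if (xlat_table[i].guest_op == guest_op)
            return xlat_table[i].native_op;
    return -1;
}

int main(void)
{
    const uint8_t guest_code[] = { G_LOAD, G_ADD, G_SUB };
    size_t i;
    for (i = 0; i < sizeof guest_code; i++)
        printf("guest 0x%02x -> native 0x%02x\n",
               (unsigned)guest_code[i], (unsigned)translate(guest_code[i]));
    return 0;
}

Of course the opcode lookup is the easy part; register mapping, the
memory model and exception behaviour are where a flat table runs out.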

Yousuf Khan
Anonymous
March 6, 2005 4:08:58 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips

Robert Myers wrote:
> AMD will make somebody else successful? Who? Just like the auto
> business, the computer business is a business of vanishing margins,
> and Dell is tops at that game.
>
> How is Dell going to get slaughtered?

The Chinese are going to slaughter it. Dell might be able to convince
protectionist US congressmen to save it in the US for a little while,
but they can't save Dell in the rest of the world.

Yousuf Khan
Anonymous
March 6, 2005 4:20:21 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips

Robert Myers wrote:
> Some part of me wondered whether AMD could break into Dell. All other
> considerations aside, it's not like Dell to add unnecessary
> complication to its life. I'm sure they looked at how many sales they
> might lose vs. the engineering costs and decided it wasn't worth it.
> However that calculation came out, I'm sure they used it to squeeze
> Intel a little harder.

Well, actually AMD has taken care of the systems engineering problem
completely for Dell. It created an ecosystem straight away for Opteron,
not just motherboards but complete barebones systems from Newisys. It
was so easy to make an Opteron system that people like IBM couldn't find
any excuse not to go with Opteron this time at all. Not to say that IBM
is thrilled to be having to sell Opterons, it would much rather
concentrate on Power and possibly Xeon, but it simply has no excuse not
to. So IBM is doing its most minimal job at selling Opterons.

So Dell has no excuse from a systems engineering point of view either.
But it does still have the marketing funds issue which I gather is much
more important to it.

> The fact that Dell holds the line makes life much tougher for AMD, and
> if Opteron with Intel scrambling in the dust didn't do it, I don't
> know what would.

AMD has been fine so far without it. AMD should really start asserting
itself and say that it is not expecting to sell anything to Dell. Even
when Dell says nice things about AMD, AMD should immediately put the
kibosh on the rumours. That'll really drive Dell nuts, it'll ruin their
negotiations with Intel. And it should continue doing that quarter after
quarter, that way Dell will only get regular discounts from Intel. When
Dell gets only regular discounts, then that puts all of Dell's
competitors at a level playing field against them.

Yousuf Khan
Anonymous
March 6, 2005 7:38:31 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips

Robert Myers wrote:
> You may be right about Lenovo, but that deal is surely structured so
> that Lenovo can't touch the server space.

Any servers that Lenovo sells won't be allowed to have the IBM name on
them, but they'll likely be able to sell Lenovo-branded servers nonetheless.

They'll be able to sell IBM-branded Lenovo products as add-ons to
server sales from both IBM and Lenovo.

>>>How is Dell going to get slaughtered?
>>
>>Technology-wise, two directions in server-space that I see off-hand: IBM
>>will have a better widget with its Xeon MP chipset and Sun will have a
>>better mid to upper-scale server with Opteron. HP is sounding enthusiastic
>>about Opteron too, though they're obviously not going to throw the (Intel)
>>baby out with the bath water when it comes down to it.
>
>
> HP's future is itanium. Sun doesn't have a future. If something
> kills Dell, it won't be Dell's failure to adopt AMD that does it.

The only thing that will kill Dell is Intel's inability to support them
anymore.

Yousuf Khan
Anonymous
March 6, 2005 10:39:27 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips

On Sun, 06 Mar 2005 13:06:21 -0500, Yousuf Khan <bbbl67@ezrs.com>
wrote:

>Robert Myers wrote:

<unsnip>

>>Part of the reason I'm not so confident about IBM's Cell processor
>>either, is because of this same reason, it's not really answering
>>anybody's needs. It's not an architecture that can take over from
>>anybody else's architecture, except for PowerPC itself which is its
>>native architecture.
>>

</unsnip>

>> Nobody needs a home computer, and worldwide demand for computers will
>> be five units. The advantages of streaming processors is low power
>> consumption and high throughput.
>
>Five units of what?
>
>They do need home electronics though. The sooner they can bring PC
>technology into the realm of home electronics the better. I'm surprised
>they can't get the cost of these things down any further. They were
>making huge strides in reducing prices until now.
>
Oh, come on, Yousuf, I was making a joking reference to the comments
of Watson of IBM about the worldwide need for computers (about five
should do it, he opined), and Olson of DEC on the need for computers
in the home (not needed at all). I unsnipped your comment, without
which the exchange makes no sense at all. Your dismissal of Cell may
be correct, but I don't think there's enough evidence anywhere for
anybody to draw any conclusions of any kind. I made reference to the
Watson and Olson opinions as a reminder of just how wrong people can
be. Olson didn't think the home computer was meeting anybody's needs,
either.

>> That's what IBM (and Intel and probably Transmeta, although they never
>> admitted it) probably wanted to do. For free, you should get runtime
>> feedback-directed optimization to make up for the overhead of
>> morphing. That's the theory, anyway. Exception and recovery may not
>> be the biggest problem, but it's one big problem I know about.
>
>What they really need is a kind of YACC (Yet Another Compiler Compiler)
>for instruction sets. A most atomic of instruction sets that has as much
>in common with other instruction sets as possible. Something that can
>simply be table-based and do a simple lookup between emulated
>instruction sets and its own native instruction set.
>

But it's processor state, not instruction sets, that's the problem.
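
A toy illustration of the distinction, with a register file and flag
semantics invented for the example rather than taken from any real
ISA: the operation itself translates in a line, but the architected
state a guest program can observe has to be reproduced exactly.

/* Toy illustration: translating the operation is easy, reproducing the
 * architected state is not. Everything here is made up for the example. */
#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint32_t regs[8];
    int flag_zero;     /* guest-visible condition flags that a translator */
    int flag_carry;    /* must reproduce exactly, not just "close enough" */
} guest_state;

/* Emulating a guest "add r0, r1": the sum is one line, the state isn't. */
static void guest_add(guest_state *s)
{
    uint64_t wide = (uint64_t)s->regs[0] + s->regs[1];
    s->regs[0]    = (uint32_t)wide;
    s->flag_zero  = (s->regs[0] == 0);
    s->flag_carry = (int)((wide >> 32) & 1);
    /* A faithful emulator also has to get traps, precise exceptions and
     * memory ordering right at every instruction boundary. */
}

int main(void)
{
    guest_state s = { { 0xFFFFFFFFu, 1 }, 0, 0 };
    guest_add(&s);
    printf("r0=%u zero=%d carry=%d\n",
           (unsigned)s.regs[0], s.flag_zero, s.flag_carry);
    return 0;
}

Multiply that by traps, precise exceptions and memory ordering, and the
instruction-to-instruction mapping stops being the hard part.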

RM
Anonymous
March 6, 2005 10:48:27 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips

On Sun, 06 Mar 2005 13:20:21 -0500, Yousuf Khan <bbbl67@ezrs.com>
wrote:

>Robert Myers wrote:
>> Some part of me wondered whether AMD could break into Dell. All other
>> considerations aside, it's not like Dell to add unnecessary
>> complication to its life. I'm sure they looked at how many sales they
>> might lose vs. the engineering costs and decided it wasn't worth it.
>> However that calculation came out, I'm sure they used it to squeeze
>> Intel a little harder.
>
>Well, actually AMD has taken care of the systems engineering problem
>completely for Dell. It created an ecosystem straight away for Opteron,
>not just motherboards but complete barebones systems from Newisys. It
>was so easy to make an Opteron system that people like IBM couldn't find
>any excuse not to go with Opteron this time at all. Not to say that IBM
>is thrilled to be having to sell Opterons, it would much rather
>concentrate on Power and possibly Xeon, but it simply has no excuse not
>to. So IBM is doing its most minimal job at selling Opterons.
>
You don't think IBM's involvement with the process technology has
something to do with it selling Opteron? They're in bed with AMD. I
look at it the other way around. When Intel looks at them fiercely,
they can just shrug their shoulders and say, "What can we do? We
gotta pay our process guys, you know."

>So Dell has no excuse from a systems engineering point of view either.
>But it does still have the marketing funds issue which I gather is much
>more important to it.
>
>> The fact that Dell holds the line makes life much tougher for AMD, and
>> if Opteron with Intel scrambling in the dust didn't do it, I don't
>> know what would.
>
>AMD has been fine so far without it. AMD should really start asserting
>itself and say that it is not expecting to sell anything to Dell. Even
>when Dell says nice things about AMD, AMD should immediately put the
>kibosh on the rumours. That'll really drive Dell nuts, it'll ruin their
>negotiations with Intel. And it should continue doing that quarter after
>quarter, that way Dell will only get regular discounts from Intel. When
>Dell gets only regular discounts, then that puts all of Dell's
>competitors at a level playing field against them.
>
That gets to a level of speculation about how the big boys play the
game that I wouldn't want to get to. I'll buy the China thing. If
AMD can crack that market and if (say) Lenovo can make decent inroads
in the server space, then maybe it would be something significant for
AMD. It works in China just like it works anywhere else, maybe worse,
because it's probably a little more tolerant of the business practices
of Intel, which is building plants in China.

I'm sure you think I'm out to sell diminished prospects for AMD. I'm
not. I just don't see a path for AMD to turn technical superiority
into significantly greater sales.

RM
March 6, 2005 11:24:16 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips

On Sun, 06 Mar 2005 07:40:53 -0500, Robert Myers wrote:

> On 5 Mar 2005 19:52:46 -0800, "YKhan" <yjkhan@gmail.com> wrote:
>
>>Robert Myers wrote:
>>> Had it worked, itanium would have satisfied customers' needs nicely: a
>>> chip that would execute non-native binaries (including 360 and x86),
>>> mainframe features, and a variety of vendors to choose from
>>> ("industry-standard architecture," in intel's code phrase).
>>
>>If Intel had done that, i.e. come up with an architecture that could
>>emulate many other architectures, then it would've guaranteed Itanium
>>100% success. A chip that could emulate both x86 and PA-RISC at full
>>speed, at the very least; possibly something that could translate
>>anything. But instead it came up with this braindead VLIW/EPIC concept
>>which was an answer to nobody's needs.
>>
> Intel thought it was taking the best ideas available at the time it
> started the project. IBM had a huge investment in VLIW, and Elbrus
> was making wild claims about what it could do.

IBM never had a "huge investment" in VLIW. It was a research project, at
best. OTOH, Intel has a *huge* investment in VLIW, and it's a bus
that isn't going anywhere. It's too easy for us hardware folks to toss off
the hard problems to the compiler folk. History shows that this isn't a
good plan. Even if Intel *could* have pulled it off, where was the
incentive for the customers? They have a business to run and
processor technology isn't generally part of it.

> Somebody who doesn't actually do computer architecture probably has a
> very poor idea of all the constraints that operate in that universe, but
> I'll stick with my notion that Intel/HP's mistake was that they had a
> clean sheet of paper and let too much coffee get spilled on it from too
> many different people.

That was one, perhaps a big one. Intel's real problem, as I see it, is
that they didn't understand their customers. I've told the FS stories
here before. FS was doomed because the customers had no use for it and
they spoke *loudly*. Itanic is no different, except that Intel didn't
listen to their customers. They had a different agenda than their
customers; not a good position to be in.

>>That would've meant a RISC-like architecture, as RISC translates to RISC
>>very well, and as well as CISC.
>>
>>Part of the reason I'm not so confident about IBM's Cell processor
>>either, is because of this same reason, it's not really answering
>>anybody's needs. It's not an architecture that can take over from
>>anybody else's architecture, except for PowerPC itself which is its
>>native architecture.
>>
> Nobody needs a home computer, and worldwide demand for computers will be
> five units.

640Kb is enough for anyone, yada-yada-yada. It's all about missing the
point. Customers rule, architects don't.

> The advantages of streaming processors is low power consumption and high throughput.

You keep saying that, but so far you're alone in the woods. Maybe for the
codes you're interested in, you're right. ...but for most of us there are
surprises in life. We don't live it linearly.

>>The Transmeta concept held a lot of excitement for me at one time, not
>>because of its power savings but its code-morphing. But its internal
>>VLIW was really only meant for translating x86 and nothing else. They
>>might as well have not bothered with VLIW as the underlying
>>architecture.
>>
> The belief was (I think) that the front end part was sufficiently
> repetitive that it could be massaged heavily to deliver a very clean
> instruction stream to the back end. The concept isn't completely wrong,
> just not sufficiently right.

I worked (tangentially) on the original TMTA product. The "proof of
concept" was on MS Word. Let's call it "VC irrational exuberance". Yes
there was lots learned there, some of it interesting, but it came at a
time when Dr. Moore was still quite alive. Brute force won.

>>I think if somebody can come up with a code-morpher that can translate
>>anything with a small firmware upgrade at only a smallish 20% loss of
>>performance, they will finally have themselves a winner: something that can
>>replace anything. Buy the one processor and you get something that can
>>run PowerPC, Sparc, MIPS, and x86 on the same system.
>>
> That's what IBM (and Intel and probably Transmeta, although they never
> admitted it) probably wanted to do. For free, you should get runtime
> feedback-directed optimization to make up for the overhead of morphing.
> That's the theory, anyway. Exception and recovery may not be the
> biggest problem, but it's one big problem I know about.

As usual, theory says that it and reality are the same. Reality has a
different opinion.

--
Keith
Anonymous
March 7, 2005 12:05:58 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips

Robert Myers wrote:
>>>Nobody needs a home computer, and worldwide demand for computers will
>>>be five units. The advantages of streaming processors is low power
>>>consumption and high throughput.
>>
>>Five units of what?
>>
>>They do need home electronics though. The sooner they can bring PC
>>technology into the realm of home electronics the better. I'm surprised
>>they can't get the cost of these things down any further. They were
>>making huge strides in reducing prices until now.
>>
>
> Oh, come on, Yousuf, I was making a joking reference to the comments
> of Watson of IBM about the worldwide need for computers (about five
> should do it, he opined), and Olson of DEC on the need for computers
> in the home (not needed at all). I unsnipped your comment, without
> which the exchange makes no sense at all.

And I snipped them again, because even with them in, it still makes no
sense whatsoever. How old do you think I am, to have gotten that
reference? Even if I was an old foghat, it's doubtful I would've gotten
that reference without at least a reminder about who said it. Or at
least quotes around it to say it's a quote.

> Your dismissal of Cell may
> be correct, but I don't think there's enough evidence anywhere for
> anybody to draw any conclusions of any kind. I made reference to the
> Watson and Olson opinions as a reminder of just how wrong people can
> be. Olson didn't think the home computer was meeting anybody's needs,
> either.

I think it's safe to assume it's going to fail to live up to its hype.
The hype being that it'll sweep the world in every field including PCs.

And likely the comments that Olsen and Watson made about the lack of
demand for home computers were completely right for the times they were
uttered. The first PC was still likely decades away at those points in
time. Even Bill Gates' infamous, "640K oughta be enough", was probably
right on the money for that point in time.

However, the Cell is almost present-day technology now, and it's pretty
easy to see where it's going to go because it's not so far away.

>>What they really need is a kind of YACC (Yet Another Compiler Compiler)
>>for instruction sets. A most atomic of instruction sets that has as much
>>in common with other instruction sets as possible. Something that can
>>simply be table-based and do a simple lookup between emulated
>>instruction sets and its own native instruction set.
>>
>
>
> But it's processor state, not instruction sets, that's the problem.

What do you mean?

Yousuf Khan
March 7, 2005 12:08:48 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips

On Sun, 06 Mar 2005 13:20:21 -0500, Yousuf Khan wrote:

> Robert Myers wrote:
>> Some part of me wondered whether AMD could break into Dell. All other
>> considerations aside, it's not like Dell to add unnecessary
>> complication to its life. I'm sure they looked at how many sales they
>> might lose vs. the engineering costs and decided it wasn't worth it.
>> However that calculation came out, I'm sure they used it to squeeze
>> Intel a little harder.
>
> Well, actually AMD has taken care of the systems engineering problem
> completely for Dell. It created an ecosystem straight away for Opteron,
> not just motherboards but complete barebones systems from Newisys. It
> was so easy to make an Opteron system that people like IBM couldn't find
> any excuse not to go with Opteron this time at all. Not to say that IBM
> is thrilled to be having to sell Opterons, it would much rather
> concentrate on Power and possibly Xeon, but it simply has no excuse not
> to. So IBM is doing its most minimal job at selling Opterons.

Kinda like OS/2? IBM isn't about doing what others easily can. It can be
described as a one-stop supermarket. "If you *really* want it, we have it!"

> So Dell has no excuse from a systems engineering point of view either.
> But it does still have the marketing funds issue which I gather is much
> more important to it.

Dell - systems engineering? Is that like "military intelligence"?

>> The fact that Dell holds the line makes life much tougher for AMD, and
>> if Opteron with Intel scrambling in the dust didn't do it, I don't
>> know what would.
>
> AMD has been fine so far without it. AMD should really start asserting
> itself and say that it is not expecting to sell anything to Dell. Even
> when Dell says nice things about AMD, AMD should immediately put the
> kibosh on the rumours. That'll really drive Dell nuts, it'll ruin their
> negotiations with Intel. And it should continue doing that quarter after
> quarter, that way Dell will only get regular discounts from Intel. When
> Dell gets only regular discounts, then that puts all of Dell's
> competitors at a level playing field against them.

I agree with the first few sentences. AMD should flat out tell the world
that they're not going after Dell, never! The second half I don't so much
agree with. Neither Intel nor Dell particularly cares.

--
Keith
Anonymous
March 7, 2005 12:19:00 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips

Robert Myers wrote:
> You don't think IBM's involvement with the process technology has
> something to do with it selling Opteron? They're in bed with AMD. I
> look at it the other way around. When Intel looks at them fiercely,
> they can just shrug their shoulders and say, "What can we do? We
> gotta pay our process guys, you know."

The only thing that IBM's chip division tells its server division to
sell is Power, nothing else. Actually that's probably coming down from
the executive board of IBM, rather than one division to another.

IBM's chip division is in bed with AMD. IBM's server division is in bed
with Intel for Xeon.

>>AMD has been fine so far without it. AMD should really start asserting
>>itself and say that it is not expecting to sell anything to Dell. Even
>>when Dell says nice things about AMD, AMD should immediately put the
>>kibosh on the rumours. That'll really drive Dell nuts, it'll ruin their
>>negotiations with Intel. And it should continue doing that quarter after
>>quarter, that way Dell will only get regular discounts from Intel. When
>>Dell gets only regular discounts, then that puts all of Dell's
>>competitors at a level playing field against them.
>>
>
> That gets to a level of speculation about how the big boys play the
> game that I wouldn't want to get to. I'll buy the China thing. If
> AMD can crack that market and if (say) Lenovo can make decent inroads
> in the server space, then maybe it would be something significant for
> AMD. It works in China just like it works anywhere else, maybe worse,
> because it's probably a little more tolerant of the business practices
> of Intel, which is building plants in China.

Well, so is AMD. Neither is building anything like a full-fledged chip
plant in China, just packaging plants. It's likely that AMD will be the
first to build a full chip plant in China though, as the subsidies in
Europe are drying up. Ireland just had to withdraw a promise of
subsidies to Intel for its Irish plant, because the EU overruled it.

> I'm sure you think I'm out to sell diminished prospects for AMD. I'm
> not. I just don't see a path for AMD to turn technical superiority
> into significantly greater sales.

It's a matter of them playing dirty like Intel. It's the only way to do it.

Yousuf Khan
Anonymous
March 7, 2005 7:19:20 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips

"Yousuf Khan" <bbbl67@ezrs.com> wrote in message
news:gOGdnf3mrJZBJrbfRVn-1g@rogers.com...
> Robert Myers wrote:
> > You don't think IBM's involvement with the process technology has
> > something to do with it selling Opteron? They're in bed with AMD. I
> > look at it the other way around. When Intel looks at them fiercely,
> > they can just shrug their shoulders and say, "What can we do? We
> > gotta pay our process guys, you know."
>
> The only thing that IBM's chip division tells its server division to
> sell is Power, nothing else. Actually that's probably coming down from
> the executive board of IBM, rather than one division to another.

You smoking that BC bud again, up there in canuckistan?

>
> IBM's chip division is in bed with AMD. IBM's server division is in bed
> with Intel for Xeon.
>
I don't even know what this sentence is supposed to mean. Maybe you
didn't notice IBM's last reorganization?
This is almost as funny as the stuff from "the sun never sets on ibm"
about how ibm deliberately made S/3 not be 360 compatible....

snipitee doo dah.

del
Anonymous
March 7, 2005 7:30:30 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips

"Yousuf Khan" <bbbl67@ezrs.com> wrote in message
news:gOOdnfSgc9DZ5LbfRVn-3A@rogers.com...
>
> They do need home electronics though. The sooner they can bring PC
> technology into the realm of home electronics the better. I'm surprised
> they can't get the cost of these things down any further. They were
> making huge strides in reducing prices until now.
>
I snipped it all, although I can't believe that someone educated in
computers would be ignorant of both watson's and olson's remarks along
with Gary Kildall flying and gates' 640k.

I was just out at sam's club the other day, and they were selling, for
550 bucks retail or the cost of a nice middle of the road TV, a Compaq
AMD system with a 17 inch flat CRT monitor (not lcd), 512MB, 180 GB
disk (might have been 250, don't remember for sure), XP, about 8 USB
ports, sound, etc etc. Even a little reader for the memory cards out of
cameras right on the front.

Computers already are into the realm of home electronics.

del cecchi
Anonymous
March 7, 2005 11:10:30 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips

On Sun, 06 Mar 2005 20:24:16 -0500, keith <krw@att.bizzzz> wrote:

>On Sun, 06 Mar 2005 07:40:53 -0500, Robert Myers wrote:
>
>> On 5 Mar 2005 19:52:46 -0800, "YKhan" <yjkhan@gmail.com> wrote:
>>

<snip>

>>>
>>>If Intel had done that, i.e. come up with an architecture that could
>>>emulate many other architectures, then it would've guaranteed Itanium
>>>100% success. A chip that could emulate both x86 and PA-RISC at full
>>>speed, at the very least; possibly something that could translate
>>>anything. But instead it came up with this braindead VLIW/EPIC concept
>>>which was an answer to nobody's needs.
>>>
>> Intel thought it was taking the best ideas available at the time it
>> started the project. IBM had a huge investment in VLIW, and Elbrus
>> was making wild claims about what it could do.
>
>IBM never had a "huge investment" in VLIW. It was a research project, at
>best. OTOH, Intel has a *huge* investment in VLIW, and it's a bus
>that isn't going anywhere. It's too easy for us hardware folks to toss off
>the hard problems to the compiler folk. History shows that this isn't a
>good plan. Even if Intel *could* have pulled it off, where was the
>incentive for the customers? They have a business to run and
>processor technology isn't generally part of it.
>
You mean the work required to tune? People will optimize the hell out
of compute intensive code--to a point. The work required to get the
world-beating SpecFP numbers is probably beyond that point.

>> Somebody who doesn't actually do computer architecture probably has a
>> very poor idea of all the constraints that operate in that universe, but
>> I'll stick with my notion that Intel/HP's mistake was that they had a
>> clean sheet of paper and let too much coffee get spilled on it from too
>> many different people.
>
>That was one, perhaps a big one. Intel's real problem, as I see it, is
>that they didn't understand their customers. I've told the FS stories
>here before. FS was doomed because the customers had no use for it and
>they spoke *loudly*. Itanic is no different, except that Intel didn't
>listen to their customers. They had a different agenda than their
>customers; not a good position to be in.
>
If alpha and pa-risc hadn't been killed off, I might agree with you
about Itanium. No one is going to abandon the high-end to an IBM
monopoly. Never happen (again).

I gather that Future Systems eventually became AS/400. We'll never
know what might have become of Itanium if it hadn't been such a
committee enterprise. The 8080, after all, was not a particularly
superior processor design, and nobody needed *it*, either.

<snip>

>
>> The advantages of streaming processors is low power consumption and high throughput.
>
>You keep saying that, but so far you're alone in the woods. Maybe for the
>codes you're interested in, you're right. ...but for most of us there are
>surprises in life. We don't live it linearly.
>
I'm definitely not alone in the woods on this one, Keith. Go look at
Dally's papers on Brook and Stream. Take a minute and visit
gpgpu.org. I could dump you dozens of papers of people doing stuff
other than graphics on stream processors, and they are doing a helluva
lot of graphics, easily found with google, gpgpu, or by checking out
siggraph conferences. Network processors are just another version of
the same story. Network processors are right at the soul of
mainstream computing, and they're going to move right onto the die.

With everything having turned into point-to-point links, computers
have turned into packet processors already. Current processing is the
equivalent of loading a container ship by hand-loading everything into
containers, loading them onto the container ship, and hand-unloading
at the other end. Only a matter of time before people figure out how
to leave things in the container for more of the trip, as the world
already does with physical cargo.

Power consumption matters. That's one point about BlueGene I've
conceded repeatedly and loudly.

Stream processors have the disadvantage of being a wildly different
computing paradigm. I'd be worried if *I* had to propose and work
through the new ways of coding. Fortunately, I don't. It's
happening.

The harder question is *why* any of this is going to happen. A lower
power data center would be a very big deal, but nobody's going to do a
project like that from scratch. PC's are already plenty powerful
enough, or so the truism goes. I don't believe it, but somebody has
to come up with the killer app, and Sony apparently thinks they have
it. We'll see.

>>>The Transmeta concept held a lot of excitement for me at one time, not
>>>because of its power savings but its code-morphing. But its internal
>>>VLIW was really only meant for translating x86 and nothing else. They
>>>might as well have not bothered with VLIW as the underlying
>>>architecture.
>>>
>> The belief was (I think) that the front end part was sufficiently
>> repetitive that it could be massaged heavily to deliver a very clean
>> instruction stream to the back end. The concept isn't completely wrong,
>> just not sufficiently right.
>
>I worked (tangentially) on the original TMTA product. The "proof of
>concept" was on MS Word. Let's call it "VC irrational exuberance". Yes
>there was lots learned there, some of it interesting, but it came at a
>time when Dr. Moore was still quite alive. Brute force won.
>
On the face of it, MS word doesn't seem like it should work because of
a huge number of unpredictable code paths. Turns out that even a word
processing program is fairly repetitive. Do you know if they included
exception and recovery in the analysis?

>>>I think if somebody can come up with a code-morpher that can translate
>>>anything with a small firmware upgrade at only a smallish 20% loss of
>>>performance, will finally have themselves a winner something that can
>>>replace anything. Buy the one processor and you get something that can
>>>run PowerPC, Sparc, MIPS, and x86 on the same system.
>>>
>> That's what IBM (and Intel and probably Transmeta, although they never
>> admitted it) probably wanted to do. For free, you should get runtime
>> feedback-directed optimization to make up for the overhead of morphing.
>> That's the theory, anyway. Exception and recovery may not be the
>> biggest problem, but it's one big problem I know about.
>
>As usual, theory says that it and reality are the same. Reality has a
>different opinion.

It's still worth understanding why. The only way to make things go
faster, beyond a certain point, is to make them predictable.

RM
Anonymous
a b à CPUs
March 7, 2005 11:34:04 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

On Sun, 06 Mar 2005 21:05:58 -0500, Yousuf Khan <bbbl67@ezrs.com>
wrote:

>Robert Myers wrote:

<snip>

> > Your dismissal of Cell may
>> be correct, but I don't think there's enough evidence anywhere for
>> anybody to draw any conclusions of any kind. I made reference to the
>> Watson and Olson opinions as a reminder of just how wrong people can
>> be. Olson didn't think the home computer was meeting anybody's needs,
>> either.
>
>I think it's safe to assume it's going to fail to live upto its hype.
>The hype being that it'll sweep the world in every field including PCs.
>
That's called knocking down a straw man. Sure, there are some game
players getting a little carried away. There is simply no way of
knowing, until it plays itself out, how big a deal this is going to
be. I hope somebody at Intel is paying attention.

>And likely the comments that Olsen and Watson made about the lack of
>demand for home computers was completely right for the times they were
>uttered.

Watson was closer to right than Olsen, and Olsen was completely wrong,
even for his time. The evidence was on the table, although he was two
years ahead of the release of VisiCalc (1977 vs. 1979).

>The first PC was still likely decades away at those points in
>time. Even Bill Gates' infamous, "640K oughta be enough", was probably
>right on the money for that point in time.
>
Several candidates for the "First PC" had been out for several years
by the time Olsen stuck his foot in his mouth. The Apple I was
released the year before. Gates was an idiot, if he ever said such a
thing, and I don't think he actually did. Think of a 1000x1000 color
bitmap.

>However, the Cell is almost present-day technology now, and it's pretty
>easy to see where it's going to go because it's not so far away.
>
Why don't you be a little more specific in your predictions, since
they're so easy to make?

>>>What they really need is a kind of YACC (Yet Another Compiler Compiler)
>>>for instruction sets. A most atomic of instruction sets that has as much
>>>in common with other instruction sets as possible. Something that can
>>>simply be table-based and do a simple lookup between emulated
>>>instruction sets and its own native instruction set.
>>>
>>
>> But it's processor state, not instruction sets, that's the problem.
>
>What do you mean?
>
For itanium, the actual effect of an instruction depends on a great
many past events that have to be kept track of (state). The op-code
appears to act on a few registers. The actual instruction operates on
a space of much larger dimensionality. x86 also has state that is
sufficiently scrambled that it's amazing that vmware can do what it
does. The problem is *much* harder than translating instructions,
especially if you want to take advantage of all of itanium's widgetry
to optimize performance. And for every interrupt, all that state has
to be kept track of and acted upon appropriately, perhaps involving
elaborate unwinding of provisional actions.
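
A minimal sketch in C of what that state looks like, for a much
simplified, hypothetical Itanium-like guest (the field names and sizes
are only illustrative): the op-code names a few registers, but whether
anything happens at all depends on a predicate bit left behind by
earlier instructions, and an interrupt has to preserve or unwind all of
it.

/* Hypothetical, heavily simplified guest state for an Itanium-like
 * machine.  The point: an emulator carries far more than the general
 * registers an op-code appears to name. */
#include <stdint.h>

struct guest_state {
    uint64_t gr[128];   /* general registers                        */
    uint64_t fr[128];   /* floating-point registers                 */
    uint8_t  pr[64];    /* predicate registers, one bit each        */
    uint64_t br[8];     /* branch registers                         */
    uint64_t ip;        /* instruction pointer                      */
    uint64_t cfm;       /* current frame marker (register stack)    */
    uint64_t psr;       /* processor status: privilege, masks, etc. */
};

/* A predicated add: the effect of the instruction depends on state
 * (pr[qp]) set by earlier instructions, not just on its operands. */
static void emulate_padd(struct guest_state *s, int qp, int r1, int r2, int r3)
{
    if (s->pr[qp])
        s->gr[r1] = s->gr[r2] + s->gr[r3];
    s->ip += 16;        /* bundle addressing simplified away here   */
}

/* On an interrupt, every field above -- plus any provisional,
 * speculative results -- must be saved or unwound before the guest's
 * handler can run. */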

RM
Anonymous
a b à CPUs
March 7, 2005 4:09:56 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

Robert Myers wrote:
> >I think it's safe to assume it's going to fail to live upto its hype.
> >The hype being that it'll sweep the world in every field including PCs.
> >
> That's called knocking down a straw man. Sure, there are some game
> players getting a little carried away. There is simply no way of
> knowing, until it plays itself out, how big a deal this is going to
> be. I hope somebody at Intel is paying attention.

The only thing it's guaranteed to be used in is Playstation as a CPU.
One of the hypes is that it's going to replace the GPUs in graphics cards,
and the entire x86 processor in PCs. It's not likely going to replace
any of the existing GPUs out there, nor any of the CPUs. Playstation
will in fact continue to have a GPU from Nvidia. Another hype is that
it's going to be used inside Apple Macs soon, again not at all likely.

> >And likely the comments that Olsen and Watson made about the lack of
> >demand for home computers was completely right for the times they were
> >uttered.
>
> Watson was closer to right than Olsen, and Olsen was completely wrong,
> even for his time. The evidence was on the table, although he was two
> years ahead of the release of VisiCalc (1977 vs. 1979).
>
> >The first PC was still likely decades away at those points in
> >time. Even Bill Gates' infamous, "640K oughta be enough", was probably
> >right on the money for that point in time.
> >
> Several candidates for the "First PC" had been out for several years
> by the time Olsen stuck his foot in his mouth. The Apple I was
> released the year before. Gates was an idiot, if he ever said such a
> thing, and I don't think he actually did. Think of a 1000x1000 color
> bitmap.

Well, then maybe Olsen was wrong.

As for Gates, I'm pretty sure his comments were restricted specifically
to the early DOS 1.0 days of the IBM PC with an 8088 processor, when
machines usually came with 64K of RAM and not the whole 640K available
to them. I myself got into PCs a little later, when 512K and 640K were
more the standard than the optional and DOS was into the 3.x versions,
and even then 640K was mostly pretty luxurious -- but of course you
could see the day coming, and quickly, when more would be needed.

> >However, the Cell is almost present-day technology now, and it's pretty
> >easy to see where it's going to go because it's not so far away.
> >
> Why don't you be a little more specific in your predictions, since
> they're so easy to make?

I thought I already was? Just to recap, the predictions are: no Cell in
PCs, no Cell in Macs, and no Cell will replace Nvidia or ATI GPUs.

And I'll add a couple more here. Cell might show up in a couple of IBM
supercomputers. It might even show up in an occasional IBM device, like
a NAS box.

> >> But it's processor state, not instruction sets, that's the problem.
> >
> >What do you mean?
> >
> For itanium, the actual effect of an instruction depends on a great
> many past events that have to be kept track of (state). The op-code
> appears to act on a few registers. The actual instruction operates on
> a space of much larger dimensionality. x86 also has state that is
> sufficiently scrambled that it's amazing that vmware can do what it
> does. The problem is *much* harder than translating instructions,
> especially if you want to take advantage of all of itanium's wigetry
> to optimize peformance. And for every interrupt, all that state has
> to be kept track of and acted upon appropriately, perhaps involving
> elaborate unwinding of provisional actions.

What, are you talking about saving registers during an interrupt?
That's all done on the stack in an x86 processor.

I'm not sure how that relates to what VMware has to do. VMware has to
give itself OS privileges in the CPU, kicking the real OS down into an
emulated CPU environment where it thinks it's still the primary
supervisor. The emulation only kicks in when privileged instructions
are executed; otherwise, instructions are passed straight through to
the processor.
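
In outline that's the classic trap-and-emulate picture. A sketch of
the idea in C follows; every helper name here is a hypothetical
stand-in for platform-specific code, and, as noted further down in the
thread, a real monitor on plain x86 can't rely on this alone because
not every sensitive instruction traps.

/* Sketch of the trap-and-emulate idea described above.  All helper
 * functions are hypothetical stand-ins. */
struct vcpu {
    unsigned long regs[16];       /* guest's general registers           */
    unsigned long cr0, cr3;       /* guest's *virtual* control registers */
    int guest_thinks_ring0;       /* guest believes it is the supervisor */
};

enum vmexit { EXIT_PRIV_INSN, EXIT_IO, EXIT_EXTERNAL_INTERRUPT };

enum vmexit run_guest(struct vcpu *v);        /* run guest code natively;
                                                 return when it traps     */
void emulate_privileged_insn(struct vcpu *v); /* apply to virtual state   */
void emulate_io(struct vcpu *v);
void handle_host_interrupt(struct vcpu *v);

void monitor_loop(struct vcpu *v)
{
    for (;;) {
        /* Ordinary, unprivileged guest instructions run straight on
         * the real CPU with no emulation overhead at all... */
        enum vmexit reason = run_guest(v);

        /* ...and only privileged or sensitive events land back here,
         * where they are applied to the guest's virtual CPU. */
        switch (reason) {
        case EXIT_PRIV_INSN:          emulate_privileged_insn(v); break;
        case EXIT_IO:                 emulate_io(v);              break;
        case EXIT_EXTERNAL_INTERRUPT: handle_host_interrupt(v);   break;
        }
    }
}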

Yousuf Khan
Anonymous
a b à CPUs
March 7, 2005 7:49:30 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

"Robert Myers" <rmyers1400@comcast.net> wrote in message
news:p9go21d2i47gt35lllvgb2p9tkdmpld5nc@4ax.com...
>
> Even from my vantage point of profound ignorance, Sun's survival
> depends on lots other than Opteron, like the SCO-Linux suit, for
> example. Who knows.

Can anybody here provide a list of Sun's annual sales and profit for
the past 10 years? I suspect that might cast some light on Sun's
ability to survive.

Robert, you seem to be a lot friendlier to Dell than you used to be.
Is it OK now that Dell kicks R&D upstream, away from ~1M white-box
screwdriver shops? ;-)

For many, many years now I've known that *announced* simulators (such
as running IBM PC code on 68000s) are always really fast, but shipping
and debugged simulators are always dog-slow. I never knew why. Now,
with your explanation of processor state, I think I understand.
Thanks.

Give Keith heck. Keith needs a good taking down, and I haven't been
able to do it lately. ;-)

Felger Carbon
who still thinks Dell has the best business plan in the PC industry,
no matter what Geo McD thinks ;-)
Anonymous
a b à CPUs
March 7, 2005 7:49:31 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

In article <uC%Wd.3101$oO4.502@newsread3.news.pas.earthlink.net>,
fmsfnf@jfoops.net says...
> "Robert Myers" <rmyers1400@comcast.net> wrote in message
> news:p9go21d2i47gt35lllvgb2p9tkdmpld5nc@4ax.com...
> >
> > Even from my vantage point of profound ignorance, Sun's survival
> > depends on lots other than Opteron, like the SCO-Linux suit, for
> > example. Who knows.
>
> Can anybody here provide a list of Sun's annual sales and profit for
> the past 10 years? I suspect that might cast some light on Sun's
> ability to survive.

I don't have ten years handy, but their annual report has five (2000-
2004).
              2004      2003     2002      2001     2000
Net revenue:  $11.2B    $11.4B   $12.5B    $18.2B   $15.7B
Net income:   -$0.388B  -$3.43B  -$0.587B  $0.927B  $1.85B

<snip>

> Give Keith heck. Keith needs a good taking down, and I haven't been
> able to do it lately. ;-)

Ah, come on Felg! You can sleep later.

> Felger Carbon
> who still thinks Dell has the best business plan in the PC industry,
> no matter what Geo McD thinks ;-)

Damning with faint praise, eh? Either way, Mike has a great retirement
plan.

--
Keith
Anonymous
a b à CPUs
March 7, 2005 9:09:13 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

On 7 Mar 2005 13:09:56 -0800, "YKhan" <yjkhan@gmail.com> wrote:

>Robert Myers wrote:

<snip>

>> Yousuf Khan wrote:
>
>> >However, the Cell is almost present-day technology now, and it's pretty
>> >easy to see where it's going to go because it's not so far away.
>> >
>> Why don't you be a little more specific in your predictions, since
>> they're so easy to make?
>
>I thought I already was? Just to recap, the predictions are: no Cell in
>PCs, no Cell in Macs, and no Cell will replace Nvidia or ATI GPUs.
>

Well, let's see. A PC, almost by definition, uses x86 (yeah, I know,
PowerPC. Right). So, no Cell in PC's. Agreed. OTOH, you've already
got Linux on Playstation. You'll have Linux on Playstation with Cell.
No fundamental reason why you need a playstation, a TV, and a PC. In
fact, you probably only need one of the three. I cannot imagine that
Sony _isn't_ thinking that way. Why does anybody need windows when
they can web surf, do their email and maybe some wordprocessing on
their TV?

>And I'll add a couple more here. Cell might show up in a couple of IBM
>supercomputers.

Somebody will jury-rig some damn fool thing. Probably not IBM, but
how would I know? It won't be serious in this generation, but that
doesn't mean it won't ever be.

>It might even show up in an occasional IBM device, like
>a NAS box.
>

What's the payoff for IBM? The hobbyists are the pioneers and
innovators here. Let's wait and see what the crazies do first.

>> >> But it's processor state, not instruction sets, that's the problem.
>> >
>> >What do you mean?
>> >
>> For itanium, the actual effect of an instruction depends on a great
>> many past events that have to be kept track of (state). The op-code
>> appears to act on a few registers. The actual instruction operates on
>> a space of much larger dimensionality. x86 also has state that is
>> sufficiently scrambled that it's amazing that vmware can do what it
>> does. The problem is *much* harder than translating instructions,
>> especially if you want to take advantage of all of itanium's wigetry
>> to optimize peformance. And for every interrupt, all that state has
>> to be kept track of and acted upon appropriately, perhaps involving
>> elaborate unwinding of provisional actions.
>
>What, are you talking about saving registers during an interrupt?
>That's all done on the stack in an x86 processor.
>
>I'm not sure how that relates to what VMWare has to do. VMWare has to
>give itself OS privileges in the CPU thus kicking the real OS down into
>an emulated CPU environment, where it thinks it's still the primary
>supervisor. The emulation only kicks in whenever privileged
>instructions are ever executed, otherwise, they are passed straight
>through normally to the processor.
>
For a fact, I haven't a clue as to how vmware does it, because x86
doesn't trap all of its sensitive instructions (nor does itanium, for
that matter).

Instructions act on the state of the processor, not just on registers,
and that's what you have to emulate. You can call that instruction
translation if you like, but it's not what you would naively imagine.
In the case of itanium, that state is incredibly complex because of
predicated instructions (among other things).

Instruction translation not being necessarily atomic, you have the
added problem of what to do when both the virtual and the real
processor are interrupted. It makes my head hurt just to think about
it. It would be fascinating to get a look at the interrupt code for
(say) dynamorio. I'll bet it's a bear, because, of course, dynamorio is
doing real-time optimization. That's not necessarily what you
proposed, but the original idea of code morphing was to get some of
the overhead back through optimization.

RM
Anonymous
a b à CPUs
March 7, 2005 9:20:56 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

On Mon, 07 Mar 2005 16:49:30 GMT, "Felger Carbon" <fmsfnf@jfoops.net>
wrote:

>"Robert Myers" <rmyers1400@comcast.net> wrote in message
>news:p9go21d2i47gt35lllvgb2p9tkdmpld5nc@4ax.com...
>>
>> Even from my vantage point of profound ignorance, Sun's survival
>> depends on lots other than Opteron, like the SCO-Linux suit, for
>> example. Who knows.
>
>Can anybody here provide a list of Sun's annual sales and profit for
>the past 10 years? I suspect that might cast some light on Sun's
>ability to survive.
>
>Robert, you seem to be a lot friendlier to Dell than you used to be.
>Is it OK now that Dell kicks R&D upstream, away from ~1M white-box
>screwdriver shops? ;-)
>
Dell customer service is horrible. Period. Maybe corporations who
buy in big quantities get good service. I didn't. Lots of others
haven't. Short of suing them, there is no recourse, and they
absolutely do not care. My local screwdriver shop, which is
admittedly a cut above average, has never let me down.

As to my being "friendlier," I'd like to think that eventually I
adjust to whatever the reality is. The reality is that Dell has
figured out how to make money on practically no margin.

Dorothy Bradbury (I think I've got her name right) suggested that Dell
runs specials the way your local supermarket runs specials: they get a
deal on a railroad car full of canned tomatoes, and that week canned
tomatoes are on sale. If they can move them right away, they don't
need to make nearly as much as if they have to finance and warehouse
the inventory. Sounds plausible to me. I actually think Dell's
pricing is sneakier than that, maybe even to the extent, suggested
elsewhere, of being illegal.

>For many, many years now I've known that *announced* simulators (such
>as running IBM PC code on 68000s) are always really fast, but shipping
>and debugged simulators are always dog-slow. I never knew why. Now,
>with your explanation of processor state, I think I understand.
>Thanks.
>
I can't tell if you're being serious. I'm amazed that emulators work
at all, but then, I'm amazed that microprocessors work at all.

>Give Keith heck. Keith needs a good taking down, and I haven't been
>able to do it lately. ;-)
>

Take Keith down? Wouldn't dream of it. Rather take my old coon dog
and go out hunting bear.

RM
Anonymous
a b à CPUs
March 8, 2005 8:25:26 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

On Tue, 08 Mar 2005 07:06:24 -0500, Robert Myers <rmyers1400@comcast.net>
wrote:

>On Mon, 07 Mar 2005 21:29:36 -0500, George Macdonald

>>Speculation? Nope it was straight from the mfrs' in Taiwan.
>
><quote>
>
>As far as percentages go, the motherboard manufacturers unanimously
>agree that the number of AMD motherboard shipments today are higher
>than the overall 80/20 market split between AMD and Intel.
>
>The advantage is still in Intel's corner, with the highest percentage
>we were quoted being that only 30% of all motherboard shipments were
>for AMD platforms.
>
></quote>
>
>The overall split is 80/20, but some m'board mfr. is talking 30% AMD.
>What am I supposed to conclude?

Different mbrd makers supply different markets and target different price
slots. If Dell, and some other large OEMs, are still using all Intel
brand, then the brands of mbrds we know are probably going to be above the
20% on AMD. Note also the low demand for i915s - probably because the
i865/875 were a significant step forward for Intel and they got a goodly
portion of the potential market with them.

There's no doubt that Athlon64 s939s and their mbrds are in short supply
just now - NewEgg, e.g., can't seem to keep the supply of nForce4s and even
nForce3s in stock. On top of that the mbrd mfrs seem to be going after the
top end nForce4 SLI market, approaching $200/mbrd, so regular single video
card systems are shortest in supply.

>>I think the
>>blitz of ".... Technology" bulletins at IDF last week was a sign of how
>>worried Intel is. This is Intel's way of attracting attention to
>>non-events in their repertoire: remember CSA... and the Dynamic Addressing
>>in their memory controller which was part of "Acceleration Technology"?
>>Where are they now? AMD had the same damned thing as "Dynamic Addressing"
>>in the Opteron long before.... without a song & dance.
>
>Intel has been doing this kind of stuff since forever. What's
>different now?

I thought last week's flurry was particularly notable - almost desperate.

>>Why no overlap? Tight grid computing is certainly something that business
>>could/should get interested in and if there are commodity switches for the
>>job.....
>>
>People with applications big enough to require more than four
>processors easily go back and forth between cluster and SMP?

Not necessarily, but if they can get some extra bang from either, why rule
them out? It's way outside my scope of expertise but apparently
distributed database works. <shrug>

>>Yes but it's no worse than the degradation for other MP systems and if you
>>have 4 or 8 working in close proximity you still have a gain. I'm not sure
>>where SUMO/NUMA is on that count but 4/8 CPUs hits a *BIG* piece of Intel's
>>current server market. Opteron also has more going for it than low latency
>>local memory.
>>
>All the advantages I can think of work best 1P, except for
>hypertransport, which has limited scalability, but rather than
>speculating, let's wait and see what anybody actually comes up with as
>a benchmark.

Yeah well I'm dying to see a comparison between the two on 64-bit. I guess
we have to wait till WinXP-64 is officially available before we see that
but I'd have thought we'd have more Linux comparisons by now. I heard that
c't had done some and published on paper... but then silence.

I think Hypertransport does OK up to maybe 8 CPUs but I dunno if that can
be arranged w/o a backplane. As for >8 you have to do some pretty fancy
footwork even with Intel CPUs *and* you don't have a standard ASIC cell
like Hypertransport to attach with.

--
Rgds, George Macdonald
March 10, 2005 1:19:06 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

On Mon, 07 Mar 2005 04:30:30 +0000, Delbert Cecchi wrote:

>
> "Yousuf Khan" <bbbl67@ezrs.com> wrote in message
> news:gOOdnfSgc9DZ5LbfRVn-3A@rogers.com...
>>
>> They do need home electronics though. The sooner they can bring PC
>> technology into the realm of home electronics the better. I'm surprised
>> they can't get the cost of these things down any further. They were
>> making huge strides in reducing prices until now.
>>
> I snipped it all, although I can't believe that someone educated in
> computers would be ignorant of both watson's and olson's remarks along
> with Gary Killdall flying and gates' 640k.

Well, some of us aren't quite as *old* as you are. ;-)

> I was just out at sam's club the other day, and they were selling, for
> 550 bucks retail or the cost of a nice middle of the road TV, a Compaq
> AMD system with a 17 inch flat CRT monitor (not lcd), 512MB, 180 GB
> disk (might have been 250, don't remember for sure), XP, about 8 USB
> ports, sound, etc etc. Even a little reader for the memory cards out of
> cameras right on the front.

I'm not surprised. My bet is that it was XP Home though. I built quite a
nice Athlon XP system for a friend for $400, sans OS and monitor a few
months ago.

> Computers already are into the realm of home electronics.

They have been for quite some time. I find it amazing how littttle home
electronics costs though. A decent TV is _well_ under $500.

--
Keith
March 10, 2005 1:53:19 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

On Mon, 07 Mar 2005 08:10:30 -0500, Robert Myers wrote:

> On Sun, 06 Mar 2005 20:24:16 -0500, keith <krw@att.bizzzz> wrote:
>
>>On Sun, 06 Mar 2005 07:40:53 -0500, Robert Myers wrote:

<snip>

>>IBM never had a "huge investment" in VLIW. It was a research project, at
>>best. OTOH, Intel has a *huge* investment in VLIW, and it's a bus
>>that isn't going anywhere. It's too easy for us hardware folks to toss of
>>the hard problems to the compiler folk. History shows that this isn't a
>>good plan. Even if Intel *could* have pulled it off, where was the
>>incentive for the customers? They have a business to run and
>>processor technology isn't generally part of it.
>>
> You mean the work required to tune? People will optimize the hell out
> of compute intensive code--to a point. The work required to get the
> world-beating SpecFP numbers is probably beyond that point.

Of course. Anyone can write a compiler that works. Writing a compiler
that makes use of static parallelism in normal codes (what Intel promised)
is quite another thing. The good folks on comp.arch were laughing at the
attempt. ...yes, at the time. It had been tried too many times to assume
that Intel was going to crack the difficult nut that no one before had
come close to.

>>That was one, perhaps a big one. Intel's real problem, as I see it, is
>>that they didn't understand their customers. I've told the FS stories
>>here before. FS was doomed because the customers had no use for it and
>>they spoke *loudly*. Itanic is no different, except that Intel didn't
>>listen to their customers. They had a different agenda than their
>>customers; not a good position to be in.
>>
> If alpha and pa-risc hadn't been killed off, I might agree with you
> about Itanium. No one is going to abandon the high-end to an IBM
> monopoly. Never happen (again).

....and you think they're about to jump into Itanic (Intel's proprietary
boat) with both feet? Look in your never-mirror again.

> I gather that Future Systems eventually became AS/400.

Revisionism at its best. Yes, FS and AS/400 were/are both single-level
store machines. Yes, some of the FS architects were sent to siberia to
find a niche. ;-) Saying that FS became AS/400 is a little bit of a
stretch. Accepting even that, AS/400 (really S/38) was in a new market.
Their customers didn't have the $kaBillions invested in software that was
the death of FS. The S/38 (and AS/400) were allowed to eat at the carcass
of DEC. ...not at the S/360 customers, who had *no* interest in it.

*THAT* is the lesson Intel hasn't learned. Their customers don't *want*
Itanic. They want x86. If Intel grew x86 into the server space, maybe.
As it is AMD is dragging it (and Intel) kicking-and-screaming there.

> We'll never know
> what might have become of Itanium if it hadn't been such a committee
> enterprise. The 8080, after all, was not a particularly superior
> processor design, and nobody needed *it*, either.

Oh, please. It *was* a committee design, so that part is a matter of the
"existence theorem". It was a flagrant attempt to kill x86, taking
that portion of the market "private", which had been cross-licensed beyond
Intel's control. Customers didn't buy in, though perhaps if it delivered
what it promised and when...

<snip>

>>> The advantages of streaming processors is low power consumption and
>>> high throughput.
>>
>>You keep saying that, but so far you're alone in the woods. Maybe for
>>the codes you're interested in, you're right. ...but for most of us
>>there are surprises in life. We don't live it linearly.
>>
> I'm definitely not alone in the woods on this one, Keith.

Not alone, but without the money to make your dreams real. I follow the
money. Were you right, Cray wouldn't need the US government's support.

> Go look at Dally's papers on Brook and Stream. Take a minute and visit gpgpu.org.
> I could dump you dozens of papers of people doing stuff other than
> graphics on stream processors, and they are doing a helluva lot of
> graphics, easily found with google, gpgpu, or by checking out siggraph
> conferences. Network processors are just another version of the same
> story. Network processors are right at the soul of mainstream
> computing, and they're going to move right onto the die.

Please. A few academic papers are making anyone (other than their
authors) any money? NPs are a special case. Show me an NP with a DP
FPU. Show me one that's making money.

> With everything having turned into point-to-point links, computers have
> turned into packet processors already. Current processing is the
> equivalent of loading a container ship by hand-loading everything into
> containers, loading them onto the container ship, and hand-unloading at
> the other end. Only a matter of time before people figure out how to
> leave things in the container for more of the trip, as the world already
> does with physical cargo.

....and you still don't like Cell? I thought you'd be creaming your jeans
over it.

> Power consumption matters. That's one point about BlueGene I've
> conceded repeatedly and loudly.

Power consumption isn't something discussed in polite conversation. ;-)
It is indeed a huge thing. Expect to see some strange things come out of
this dilemma.

> Stream processors have the disadvantage that it's a wildly different
> computing paradigm. I'd be worried if *I* had to propose and work
> through the new ways of coding. Fortunately, I don't. It's happening.

It's *not* like this is new. It's been done, yet for some reason not
enough want it to pay the freight. If it happens, fine. That only means
that someone has figured out that it's good for something. Meanwhile...

> The harder question is *why* any of this is going to happen. A lower
> power data center would be a very big deal, but nobody's going to do a
> project like that from scratch. PC's are already plenty powerful
> enough, or so the truism goes. I don't believe it, but somebody has to
> come up with the killer app, and Sony apparently thinks they have it.
> We'll see.

Didn't you just contradict yourself? ...in one paragraph?

>>I worked (tangentially) on the original TMTA product. The "proof of
>>concept" was on MS Word. Let's call it "VC irrational exuberance". Yes
>>there was lots learned there, some of it interesting, but it came at a
>>time when Dr. Moore was still quite alive. Brute force won.
>>
> On the face of it, MS word doesn't seem like it should work because of a
> huge number of unpredictable code paths. Turns out that even a word
> processing program is fairly repetitive. Do you know if they included
> exception and recovery in the analysis?

I've likely said more than I should have, but yes. ...as much as there is
in M$ Weird. IIRC it was rather well traced. Much of the work (not the
software/analysis) was done in the organization I was in, but I tried my
best to steer clear of it. Call me the original non-believer in; "and then
a miracle happens". ;-)


>>As usual, theory says that it and reality are the same. Reality has a
>>different opinion.
>
> It's still worth understanding why. The only way to make things go
> faster, beyond a certain point, is to make them predictable.

Life isn't predictable though. Predictions that turn out to be false
*waste* power. ...and that is where we are now. We're trying to predict
tomorrow and using enough power for a year to do it.

--
Keith
March 10, 2005 2:03:00 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

On Mon, 07 Mar 2005 18:20:56 -0500, Robert Myers wrote:

> On Mon, 07 Mar 2005 16:49:30 GMT, "Felger Carbon" <fmsfnf@jfoops.net>
> wrote:
>
>>Give Keith heck. Keith needs a good taking down, and I haven't been
>>able to do it lately. ;-)
>>
>
> Take Keith down? Wouldn't dream of it. Rather take my old coon dog
> and go out hunting bear.

Oh, you've met me?

Story time: A (rather attractive) bar tender once told me that a friend
was taking her out to hunt "bear". Says I (knowing exactly what she
*said* and meant), "I want to see *that*". She was rather taken aback that
I would doubt her hunting abilities. Says I, "nope, I just want to see
you hunting bare". After ten minutes or so (and keeping the entire
barroom in stitches) I did have to explain homophones. I would have gotten
slapped, but she knew that I'd have enjoyed it too much. ;-)

--
Keith
Anonymous
a b à CPUs
March 10, 2005 10:23:33 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

On Wed, 09 Mar 2005 22:53:19 -0500, keith <krw@att.bizzzz> wrote:

>On Mon, 07 Mar 2005 08:10:30 -0500, Robert Myers wrote:
>
>> On Sun, 06 Mar 2005 20:24:16 -0500, keith <krw@att.bizzzz> wrote:
>>
>>>On Sun, 06 Mar 2005 07:40:53 -0500, Robert Myers wrote:
>
> <snip>
>
>>>IBM never had a "huge investment" in VLIW. It was a research project, at
>>>best. OTOH, Intel has a *huge* investment in VLIW, and it's a bus
>>>that isn't going anywhere. It's too easy for us hardware folks to toss of
>>>the hard problems to the compiler folk. History shows that this isn't a
>>>good plan. Even if Intel *could* have pulled it off, where was the
>>>incentive for the customers? They have a business to run and
>>>processor technology isn't generally part of it.
>>>
>> You mean the work required to tune? People will optimize the hell out
>> of compute intensive code--to a point. The work required to get the
>> world-beating SpecFP numbers is probably beyond that point.
>
>Of course. Anyone can write a compiler that works. Writing a compiler
>that makes use of static parallelism in normal codes (what Intel promised)
>is quite another thing.

By "static parallelism" I think you mean compile-time scheduling to
exploit parallelism.
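
A toy illustration of the distinction in C (nothing here is from the
thread, just a sketch): in the first loop every iteration is
independent, so the compiler can schedule the work entirely at compile
time; in the second, each load address depends on the previous load, so
whatever overlap exists can only be discovered at run time.

#include <stddef.h>

/* (a) Statically schedulable: the iterations are provably independent,
 * so a compiler can unroll, bundle, or software-pipeline them. */
void saxpy(float *restrict y, const float *restrict x, float a, int n)
{
    for (int i = 0; i < n; i++)
        y[i] += a * x[i];
}

/* (b) Pointer chasing: the next address is known only after the
 * previous load completes, so no compile-time schedule can expose
 * parallelism; it has to be found (if at all) dynamically. */
struct node { struct node *next; int val; };

int sum_list(const struct node *p)
{
    int s = 0;
    for (; p != NULL; p = p->next)
        s += p->val;
    return s;
}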

>The good folks on comp.arch were laughing at the
>attempt. ...yes, at the time. It had been tried too many times to assume
>that Intel was going to crack the difficult nut that no one before had
>come close to.

I think I've said this before, and maybe even to this exact point: If
you think a problem can be done, and there is no plausible
demonstration that it can't be done (e.g. the computational complexity
of the game "go"), then it is an unsolved problem, not an impossible
problem.

How to handicap levels of implausibility? Compared to what is
actually accomplished in silicon, building the required compiler seems
like a plausible bet.

We all have our hobby horses: yours is latency, mine is predictability
(and whatever bandwidth is necessary to exploit it). Understanding
and exploiting predictability is ultimately a bigger win than any
other possible advance in computation that I know of. A poster to
comp.arch suggested, not entirely seriously, that working around the
latency of a cluster shouldn't be any harder than working around the
memory wall and that we should be using the same kinds of strategies
(OoO, prefetch, cache, speculative computation). Whether he intended
his remark to be taken seriously or not, we will eventually need that
level of sophistication, and it all comes down to the same thing:
understanding the predictability of a computation well enough to be
able to reach far enough into the future to beat latency.

Yes, I just made the problem even harder. Until there is a
demonstration that it can't be done, it is an unsolved problem, not an
impossible problem.
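
A toy C example of reaching into the future at the smallest scale,
assuming the GCC/Clang __builtin_prefetch intrinsic (the look-ahead
distance is a guess that would need tuning): the prefetch pays off only
because the access pattern is perfectly predictable, which is the whole
point about predictability being the scarce resource.

/* Toy latency hiding: ask for data a fixed distance ahead of its use. */
float sum_with_prefetch(const float *a, int n)
{
    enum { DIST = 64 };                 /* elements to look ahead       */
    float s = 0.0f;

    for (int i = 0; i < n; i++) {
        if (i + DIST < n)
            __builtin_prefetch(&a[i + DIST], 0, 1);  /* read, low reuse */
        s += a[i];
    }
    return s;
}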

The business prospects of Intel in the meantime? I think they're mean
enough to survive.

>
>>>That was one, perhaps a big one. Intel's real problem, as I see it, is
>>>that they didn't understand their customers. I've told the FS stories
>>>here before. FS was doomed because the customers had no use for it and
>>>they spoke *loudly*. Itanic is no different, except that Intel didn't
>>>listen to their customers. They had a different agenda than their
>>>customers; not a good position to be in.
>>>
>> If alpha and pa-risc hadn't been killed off, I might agree with you
>> about Itanium. No one is going to abandon the high-end to an IBM
>> monopoly. Never happen (again).
>
>...and you think they're about to jump into Itanic (Intel's proprietary
>boat) with both feet? Look in your never-mirror again.
>

Power is not proprietary? Only one company builds boxes with Power.
Many will build boxes with itanium.

>> I gather that Future Systems eventually became AS/400.
>
>Revisionism at its best. Yes, FS and AS/400 were/are both single-level
>store machines. Yes, some of the FS architects were sent to siberia to
>find a niche. ;-) Saying that FS became AS/400 is a little bit of a
>stretch. Accpeting even that, AS/400 (really S/38) was in a new market.
>Their customers didn't have the $kaBillions invested in software that was
>the death of FS. The S/38 (and AS/400) were allowed to eat at the carcas
>of DEC. ...not at the S/360 customers, who had *no* interest in it.
>
>*THAT* is the lesson Intel hasn't learned. Their customers don't *want*
>Itanic. They want x86. If Intel grew x86 into the server space, maybe.
>As it is AMD is draging it (and Intel) kicking-and-screaming there.

Intel is certainly not happy about the success of Opteron.

<snip>

>
>>>> The advantages of streaming processors is low power consumption and
>>>> high throughput.
>>>
>>>You keep saying that, but so far you're alone in the woods. Maybe for
>>>the codes you're interested in, you're right. ...but for most of us
>>>there are surprises in life. We don't live it linearly.
>>>
>> I'm definitely not alone in the woods on this one, Keith.
>
>Not alone, but without the money to make your dreams real. I follow the
>money. Were you right, Cray wouldn't need the US government's support.
>
Cray has not much of anything to do with anything at this point.
Another national lab poodle. And I think you just moved the
goalposts.

>> Go look at Dally's papers on Brook and Stream. Take a minute and visit gpgpu.org.
>> I could dump you dozens of papers of people doing stuff other than
>> graphics on stream processors, and they are doing a helluva lot of
>> graphics, easily found with google, gpgpu, or by checking out siggraph
>> conferences. Network processors are just another version of the same
>> story. Network processors are right at the soul of mainstream
>> computing, and they're going to move right onto the die.
>
>Please. A few academic papers are making anyone (other than their
>authors) any money? NPs are a special case. Show me an NP with a DP
>FPU. Show me one that's making money.
>
The fundamentals in favor of streaming computation in terms of power
consumption are just overwhelming, and they become more so as scale
sizes shrink.

>> With everything having turned into point-to-point links, computers have
>> turned into packet processors already. Current processing is the
>> equivalent of loading a container ship by hand-loading everything into
>> containers, loading them onto the container ship, and hand-unloading at
>> the other end. Only a matter of time before people figure out how to
>> leave things in the container for more of the trip, as the world already
>> does with physical cargo.
>
>...and you still don't like Cell? I thought you'd be creaming your jeans
>over it.
>
Who ever said I didn't like Cell? It doesn't do standard floating
point arithmetic and it isn't really designed for double precision
floating point arithmetic, but Cell or a Cell derivative could
revolutionize computation.

>> Power consumption matters. That's one point about BlueGene I've
>> conceded repeatedly and loudly.
>
>Power consumption isn't something discussed in polite conversation. ;-)

I see it discussed more and more. Blades have become more powerful
and they've become less unreasonable in price, but the resulting power
density creates a different problem for data centers.

>It is indeed a huge thing. Expect to see some strange things come out of
>this dilemma.
>
>> Stream processors have the disadvantage that it's a wildly different
>> computing paradigm. I'd be worried if *I* had to propose and work
>> through the new ways of coding. Fortunately, I don't. It's happening.
>
>It's *not* like this is new. It's been done, yet for some reason not
>enough want it to pay the freight. If it happens, fine. That only means
>that someone has figured out that it's good for something. Meanwhile...
>
>> The harder question is *why* any of this is going to happen. A lower
>> power data center would be a very big deal, but nobody's going to do a
>> project like that from scratch. PC's are already plenty powerful
>> enough, or so the truism goes. I don't believe it, but somebody has to
>> come up with the killer app, and Sony apparently thinks they have it.
>> We'll see.
>
>Didn't you just contradict yourself? ...in one paragraph?
>
Don't know where you think the apparent contradiction is. An argument
could be made that VisiCalc made the PC. Whether that's exactly true
or not, VisiCalc made the usefulness of a PC as anything but a very
expensive typewriter immediately obvious.

I'm betting that the applications for streaming computation will come.
Whether it is Sony and Cell that make the breakthrough and that it is
imminent is less clear than that the breakthrough will come.

RM
March 12, 2005 7:32:52 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

On Thu, 10 Mar 2005 07:23:33 -0500, Robert Myers wrote:

> On Wed, 09 Mar 2005 22:53:19 -0500, keith <krw@att.bizzzz> wrote:
>
>>On Mon, 07 Mar 2005 08:10:30 -0500, Robert Myers wrote:
>>
>>> On Sun, 06 Mar 2005 20:24:16 -0500, keith <krw@att.bizzzz> wrote:
>>>
>>>>On Sun, 06 Mar 2005 07:40:53 -0500, Robert Myers wrote:
>>
>> <snip>
>>
>>>>IBM never had a "huge investment" in VLIW. It was a research project, at
>>>>best. OTOH, Intel has a *huge* investment in VLIW, and it's a bus
>>>>that isn't going anywhere. It's too easy for us hardware folks to toss of
>>>>the hard problems to the compiler folk. History shows that this isn't a
>>>>good plan. Even if Intel *could* have pulled it off, where was the
>>>>incentive for the customers? They have a business to run and
>>>>processor technology isn't generally part of it.
>>>>
>>> You mean the work required to tune? People will optimize the hell out
>>> of compute intensive code--to a point. The work required to get the
>>> world-beating SpecFP numbers is probably beyond that point.
>>
>>Of course. Anyone can write a compiler that works. Writing a compiler
>>that makes use of static parallelism in normal codes (what Intel promised)
>>is quite another thing.
>
> By "static parallelism" I think you mean compile-time scheduling to
> exploit parallelism.

Yes. Static as in; "will never change", rather than dynamic; "will
change, whether we want it to or not".

>>The good folks on comp.arch were laughing at the
>>attempt. ...yes, at the time. It had been tried too many times to assume
>>that Intel was going to crack the difficult nut that no one before had
>>come close to.
>
> I think I've said this before, and maybe even to this exact point: If
> you think a problem can be done, and there is no plausible
> demonstration that it can't be done (e.g. the computational complexity
> of the game "go"), then it is an unsolved problem, not an impossible
> problem.

True. I'd say it's an "academic" problem, so leave it to the
academics. Meanwhile, I'll make money with what's known to be solvable.
I'm not about to bet my life's savings (company, were I CEO) on something
that has shown itself to be an intractable problem for several decades.

> How to handicap levels of implausibility? Compared to what is actually
> accomplished in silicon, building the required compiler seems like a
> plausible bet.

Doesn't to me! Better minds than mine have tried and failed. Intel
proved once again that it was a *hard* problem.

> We all have our hobby horses: yours is latency, mine is predictability
> (and whatever bandwidth is necessary to exploit it). Understanding and
> exploiting predictability is ultimately a bigger win than any other
> possible advance in computation that I know of.

Understanding that the world isn't predictable leads one to not waste
effort looking down that path.

> A poster to comp.arch
> suggested, not entirely seriously, that working around the latency of a
> cluster shouldn't be any harder than working around the memory wall and
> that we should be using the same kinds of strategies (OoO, prefetch,
> cache, speculative computatio). Whether he intended his remark to be
> taken seriously or not, we will eventually need that level of
> sophistication, and it all comes down to the same thing: understanding
> the predictability of a computation well enough to be able to reach far
> enough into the future to beat latency.
>
> Yes, I just made the problem even harder. Until there is a
> demonstration that it can't be done, it is an unsolved problem, not an
> impossible problem.

You bet your life's savings. I'll pass.

> The business prospects of Intel in the meantime? I think they're mean
> enough to survive.

Survive, sure. I have no doubt about Intel's survival, but they have
pi$$ed away ten digits of their owner's money, while letting #2
define the next architecture.


<snip>

>>...and you think they're about to jump into Itanic (Intel's proprietary
>>boat) with both feet? Look in your never-mirror again.
>>
>>
> Power is not proprietary? Only one company builds boxes with Power.
> Many will build boxes with itanium.

I didn't say it wasn't. You implied that Itanic was somehow less
proprietary. Actually, Power isn't proprietary. There are others in the
business. ...heard of Motorola?

>>> I gather that Future Systems eventually became AS/400.
>>
>>Revisionism at its best. Yes, FS and AS/400 were/are both single-level
>>store machines. Yes, some of the FS architects were sent to siberia to
>>find a niche. ;-) Saying that FS became AS/400 is a little bit of a
>>stretch. Accpeting even that, AS/400 (really S/38) was in a new market.
>>Their customers didn't have the $kaBillions invested in software that
>>was the death of FS. The S/38 (and AS/400) were allowed to eat at the
>>carcas of DEC. ...not at the S/360 customers, who had *no* interest in
>>it.
>>
>>*THAT* is the lesson Intel hasn't learned. Their customers don't *want*
>>Itanic. They want x86. If Intel grew x86 into the server space, maybe.
>>As it is AMD is draging it (and Intel) kicking-and-screaming there.
>
> Intel is certainly not happy about the success of Opteron.

{{{{BING}}}}

We have the winner for understatement of the year! ;-)

>>>>> The advantages of streaming processors is low power consumption and
>>>>> high throughput.
>>>>
>>>>You keep saying that, but so far you're alone in the woods. Maybe for
>>>>the codes you're interested in, you're right. ...but for most of us
>>>>there are surprises in life. We don't live it linearly.
>>>>
>>> I'm definitely not alone in the woods on this one, Keith.
>>
>>Not alone, but without the money to make your dreams real. I follow the
>>money. Were you right, Cray wouldn't need the US government's support.
>>
> Cray has not much of anything to do with anything at this point. Another
> national lab poodle. And I think you just moved the goalposts.

At this point? They *got* to this point by playing a role in your dreams.
I haven't moved *anything*. You love Crayish architectures. I love
businesses that make sense. Computers are no longer a toy for me.
They're a means to an end. I really don't care what architecture wins.

>>> Go look at Dally's papers on Brook and Stream. Take a minute and
>>> visit gpgpu.org. I could dump you dozens of papers of people doing
>>> stuff other than graphics on stream processors, and they are doing a
>>> helluva lot of graphics, easily found with google, gpgpu, or by
>>> checking out siggraph conferences. Network processors are just
>>> another version of the same story. Network processors are right at
>>> the soul of mainstream computing, and they're going to move right onto
>>> the die.
>>
>>Please. A few academic papers are making anyone (other than their
>>authors) any money? NPs are a special case. Show me an NP with a DP
>>FPU. Show me one that's making money.
>>
> The fundamentals in favor of streaming computation in terms of power
> consumption are just overwhelming, and they become more so as scale
> sizes shrink.

Let me repeat; "you keep saying this", but if the problems can't be solved
by streaming they don't save any power at all. The universe of problems
that are solvable by streaming is on the order of the size of
"embarrassingly parallel" problems that can be solved with an array of a
kabillion 8051s.

>>> With everything having turned into point-to-point links, computers
>>> have turned into packet processors already. Current processing is the
>>> equivalent of loading a container ship by hand-loading everything into
>>> containers, loading them onto the container ship, and hand-unloading
>>> at the other end. Only a matter of time before people figure out how
>>> to leave things in the container for more of the trip, as the world
>>> already does with physical cargo.
>>
>>...and you still don't like Cell? I thought you'd be creaming your
>>jeans over it.
>>
> Who ever said I didn't like Cell? It doesn't do standard floating point
> arithmetic and it isn't really designed for double precision floating
> point arithmetic, but Cell or a Cell derivative could revolutionize
> computation.

I thought you were one of the nay-sayers, like Felger. ;-)

>>> Power consumption matters. That's one point about BlueGene I've
>>> conceded repeatedly and loudly.
>>
>>Power consumption isn't something discussed in polite conversation. ;-)
>
> I see it discussed more and more. Blades have become more powerful and
> they've become less unreasonable in price, but the resulting power
> density creates a different problem for data centers.

Note the smiley. I'd really like to go here, but I don't know where the
confidentiality edge is, so... Let me just say that you aren't the
only one noticing these things.

<snip>

>>> The harder question is *why* any of this is going to happen. A lower
>>> power data center would be a very big deal, but nobody's going to do a
>>> project like that from scratch. PC's are already plenty powerful
>>> enough, or so the truism goes. I don't believe it, but somebody has
>>> to come up with the killer app, and Sony apparently thinks they have
>>> it. We'll see.
>>
>>Didn't you just contradict yourself? ...in one paragraph?
>>
> Don't know where you think the apparent contradiction is.

After re-reading the paragraph, I must have read it wrong at first.
Perhaps your dual (and contradictory) use of "power" threw me off.

> An argument
> could be made that VisiCalc made the PC. Whether that's exactly true or
> not, VisiCalc made the usefulness of a PC as anything but a very
> expensive typewriter immediately obvious.

Ok, but I'd argue that it was a worthwhile business machine even if it
were only an expensive typewriter (and gateway into the mainframe and
later the network).

> I'm betting that the applications for streaming computation will come.
> Whether it is Sony and Cell that make the breakthrough and that it is
> imminent is less clear than that the breakthrough will come.

How much? What areas of computing?

--
Keith
Anonymous
a b à CPUs
March 12, 2005 9:54:12 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

On Sat, 12 Mar 2005 16:32:52 -0500, keith <krw@att.bizzzz> wrote:

>On Thu, 10 Mar 2005 07:23:33 -0500, Robert Myers wrote:
>
>> On Wed, 09 Mar 2005 22:53:19 -0500, keith <krw@att.bizzzz> wrote:
>>
>>>On Mon, 07 Mar 2005 08:10:30 -0500, Robert Myers wrote:
>>>
>>>> On Sun, 06 Mar 2005 20:24:16 -0500, keith <krw@att.bizzzz> wrote:
>>>>
>>>>>On Sun, 06 Mar 2005 07:40:53 -0500, Robert Myers wrote:
>>>

<snip>

>
>>>>>> The advantages of streaming processors is low power consumption and
>>>>>> high throughput.
>>>>>
>>>>>You keep saying that, but so far you're alone in the woods. Maybe for
>>>>>the codes you're interested in, you're right. ...but for most of us
>>>>>there are surprises in life. We don't live it linearly.
>>>>>
>>>> I'm definitely not alone in the woods on this one, Keith.
>>>
>>>Not alone, but without the money to make your dreams real. I follow the
>>>money. Were you right, Cray wouldn't need the US government's support.
>>>
>> Cray has not much of anything to do with anything at this point. Another
>> national lab poodle. And I think you just moved the goalposts.
>
>At this point? They *got* to this point by playing a role in your dreams.
>I haven't moved *anything*. You love Crayish architectures. I love
>busiesses that make sense. Computers are no longer a toy for me.
>They're a means to an end. I really don't care what architecture wins.
>
I can't think of something appropriately compact and eloquent to say
in reply. Of course a computer has to make business sense. I, and
others, have argued for Cray-type machines because relative amateurs
can hack decent code for them. That makes them good for scientific
computation in more ways than one. You can't easily hack and you
can't easily debug cluster code. Cray-type machines also tend to have
decent bisection bandwidth, a department in which Blue Gene, at least
as installed at LLNL, is pathetic.

>>>> Go look at Dally's papers on Brook and Stream. Take a minute and
>>>> visit gpgpu.org. I could dump you dozens of papers of people doing
>>>> stuff other than graphics on stream processors, and they are doing a
>>>> helluva lot of graphics, easily found with google, gpgpu, or by
>>>> checking out siggraph conferences. Network processors are just
>>>> another version of the same story. Network processors are right at
>>>> the soul of mainstream computing, and they're going to move right onto
>>>> the die.
>>>
>>>Please. A few academic papers are making anyone (other than their
>>>authors) any money? NPs are a special case. Show me an NP with a DP
>>>FPU. Show me one that's making money.
>>>
>> The fundamentals in favor of streaming computation in terms of power
>> consumption are just overwhelming, and they become more so as scale
>> sizes shrink.
>
>Let me repeat; "you keep saying this", but if the problems can't be solved
>by streaming they don't save any power at all. The universe of problems
>that are solvable by streaming is on the order of the size of
>"embarrasingly parallel" problems that can be solved with an array of a
>kabillion 8051s.
>
We really don't know at this point. There was a long thread on
comp.arch, last summer I think, where we thrashed through a proposed
register-transfer architecture from someone who didn't really know
what he was doing (remember the WIZ processor architecture?). It got
weird enough to pull John Mashey out of the woodwork.

At one point, the thread attracted one Nicholas (sp?) Capens, who had
written some shader code to which he provided links. He talked about
"straightening out the kinks" so you could stream code. Those are the
right words. It's hard to get a compiler to do that sort of thing,
although compilers have mostly learned how to do what I could do as a
Cray Fortran programmer, but what I knew how to do is far from
exhausting what is possible. Using the vector mask register to merge
two streams when you don't know which of two results to use is an
example of streaming a computation that doesn't stream without some
trickery.
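
For what it's worth, here is that merge-under-mask trick written out
in scalar C rather than Cray Fortran (a sketch, not anybody's shipping
code): compute both candidate results for every element and select
under a mask, so the loop body becomes straight-line code that streams.

/* The branchy form: each element takes a different path, so it does
 * not stream. */
void clip_branchy(float *y, const float *x, int n)
{
    for (int i = 0; i < n; i++) {
        if (x[i] > 0.0f)
            y[i] = 2.0f * x[i];
        else
            y[i] = -x[i];
    }
}

/* The merged form: compute both results unconditionally and select
 * under a mask.  The ?: compiles to a branch-free select on most
 * targets; on a vector machine the mask lives in the vector mask
 * register and the whole loop streams. */
void clip_merged(float *y, const float *x, int n)
{
    for (int i = 0; i < n; i++) {
        float if_true  = 2.0f * x[i];     /* result when x[i] > 0      */
        float if_false = -x[i];           /* result otherwise          */
        int   mask     = (x[i] > 0.0f);   /* the "vector mask" bit     */
        y[i] = mask ? if_true : if_false; /* merge under mask          */
    }
}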

People *are* doing that kind of stuff with shaders right now, and some
are playing around with doing things other than graphics that way.

There is no general set of transformations you can perform, no general
theory of coding to say what is possible. Putting a compiler to work
on naive C or Fortran and expecting it to make everything suitable for
a stream processor is a non-starter, but we don't yet know what the
real limits of human cleverness are. I think we have barely scratched
the surface.

As to embarrassingly parallel, I think the Kasparov chess match
provided one example of what is possible, and, again, we don't really
know what people will do when arbitrarily large numbers of
embarrassingly parallel operations are almost free.

>>>> With everything having turned into point-to-point links, computers
>>>> have turned into packet processors already. Current processing is the
>>>> equivalent of loading a container ship by hand-loading everything into
>>>> containers, loading them onto the container ship, and hand-unloading
>>>> at the other end. Only a matter of time before people figure out how
>>>> to leave things in the container for more of the trip, as the world
>>>> already does with physical cargo.
>>>
>>>...and you still don't like Cell? I thought you'd be creaming your
>>>jeans over it.
>>>
>> Who ever said I didn't like Cell? It doesn't do standard floating point
>> arithmetic and it isn't really designed for double precision floating
>> point arithmetic, but Cell or a Cell derivative could revolutionize
>> computation.
>
>I though you were one of the nay--sayers, like Felger. ;-)
>

I picture Felger staying warm in winter with his racks of vacuum-tube
logic. ;-).

>>>> Power consumption matters. That's one point about BlueGene I've
>>>> conceded repeatedly and loudly.
>>>
>>>Power consumption isn't something discussed in polite conversation. ;-)
>>
>> I see it discussed more and more. Blades have become more powerful and
>> they've become less unreasonable in price, but the resulting power
>> density creates a different problem for data centers.
>
>Note the smiley. I'd really like to go here, but I don't know where the
>confidentiality edge is, so... Let me just say that you aren't the
>only one noticing these things.
>

I had assumed so.

<snip>

>
>> I'm betting that the applications for streaming computation will come.
>> Whether it is Sony and Cell that make the breakthrough and that it is
>> imminent is less clear than that the breakthrough will come.
>
>How much? What areas of computing?

Isn't 10x the standard for what constitutes a breakthrough?

1. Image processing (obvious)
2. Graphics (obvious)
3. Physics for games (obvious, discussed in another thread)
4. Physics for proteins (obvious, the only question is how big an
application it is and how much difference it will make).
5. Brute force searching (not obvious, outside my area of competence,
really)
6. Monte Carlo (any problem can be made embarrassingly parallel that
way; a small sketch follows this list). Financial markets are an
obvious application.
7. Information retrieval (n-grams, cluster analysis and such stuff,
outside my area of competence).
8. Bioinformatics (outside my area of competence).

There's more, I'm sure, but that should be a start.
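
On item 6, a minimal C sketch of why Monte Carlo is embarrassingly
parallel (the tiny random-number generator and the batch counts are
only there to keep the example self-contained): every trial is
independent, so the batches could run on four cores or four million
stream elements, and only the hit counts have to be combined.

#include <stdio.h>

/* Tiny LCG so the sketch is self-contained; illustration only. */
static unsigned int next_rand(unsigned int *state)
{
    *state = *state * 1103515245u + 12345u;
    return (*state >> 16) & 0x7fffu;           /* 15-bit result */
}

static long count_hits(long trials, unsigned int seed)
{
    long hits = 0;
    for (long i = 0; i < trials; i++) {
        double x = next_rand(&seed) / 32768.0; /* point in the unit square */
        double y = next_rand(&seed) / 32768.0;
        if (x * x + y * y <= 1.0)              /* inside the quarter circle */
            hits++;
    }
    return hits;
}

int main(void)
{
    const int  batches = 4;                    /* each batch is independent */
    const long per_batch = 1000000L;
    long total = 0;

    for (int b = 0; b < batches; b++)
        total += count_hits(per_batch, 12345u + (unsigned int)b);

    printf("pi is roughly %f\n",
           4.0 * (double)total / (double)(batches * per_batch));
    return 0;
}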

RM