Might be a book that even R. Myers can love :-)

Anonymous
June 15, 2004 8:44:54 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

Jim Carlson and Jerry Huck's "Itanium Rising" book as described in the
following article:

http://www.shannonknowshpc.com/stories.php?story=04/06/...

Yousuf Khan

--
Humans: contact me at ykhan at rogers dot com
Spambots: just reply to this email address ;-)

Anonymous
June 15, 2004 5:53:46 PM

Yousuf Khan wrote:

> Jim Carlson and Jerry Huck's "Itanium Rising" book as described in the
> following article:
>
> http://www.shannonknowshpc.com/stories.php?story=04/06/...
>

I made a post to comp.arch about this book, with the subject line
"Stupefying hubris from Intel/HP about Itanium", on March 1, 2003.
Del Cecchi had previously pointed out the existence of the book on
October 2, 2002, but it's not clear that anyone but me read (well, at
least skimmed) it. In any case, I was the only one to say anything
substantive about the book's contents. If I weren't so interested in
self-description (what people and companies say about themselves and
why), I would have been tremendously annoyed at the book.

Even leaving aside some reasonably subtle technological questions (some
of which have been discussed on comp.arch), _Itanium_Rising_ can't tell
the really interesting story because its authors would probably get
fired for even trying to tell it. Aside from building a processor with
an ISA that wouldn't be subject to any of Intel's cross-licensing
agreements, what did the principals in this drama really think they were
buying into? Even the history of the internal presentations that were
made would be fascinating. What did they think they knew, and when did
they think they knew it? ;-)

RM
Anonymous
June 16, 2004 7:36:34 PM

Robert Myers <rmyers1400@comcast.net> wrote:
> I made a post to comp.arch about this book, with the subject line
> "Stupefying hubris from Intel/HP about Itanium", on March 1, 2003.
> Del Cecchi had previously pointed out the existence of the book on
> October 2, 2002, but it's not clear that anyone but me read (well,
> at least skimmed) it. In any case, I was the only one to say
> anything substantive about the book's contents. If I weren't so
> interested in self-description (what people and companies say about
> themselves and why), I would have been tremendously annoyed at the
> book.

Okay, then maybe R. Myers might not like this book. :-)

Yousuf Khan
Anonymous
June 16, 2004 8:29:29 PM

Yousuf Khan wrote:
> Robert Myers <rmyers1400@comcast.net> wrote:
>
>> ... If I weren't so
>>interested in self-description (what people and companies say about
>>themselves and why), I would have been tremendously annoyed at the
>>book.
>
>
> Okay, then maybe R. Myers might not like this book. :-)
>

But I am intensely interested in self-description, I'm interested in
what makes grand technical initiatives succeed or fail, and I think it's
more interesting to say what you think is going on before all the horses
have crossed the finish line than it is to be a smug historian. :-).

Like the book or not is more or less beside the point. I was
stunned that Intel/HP let such a daringly unrepentant bit of
self-promotion see daylight in view of the disappointing performance of
Itanium, but even that (fairly reasonable, I think) reaction is a
distraction. The fact is that the book _was_ published, and without
corporate gloss or apologia, as far as I know: just, here it is, the
most wonderful processor ever, just as we said it would be.

Leave out the technical issues. If Intel/HP have to climb down from the
fortress they've built around Itanium, how will they ever pull it off?
It would be like IBM admitting that maybe System 360 wasn't such a great
idea, after all (which, who knows, maybe it wasn't).

RM
Anonymous
June 16, 2004 8:29:30 PM

On Wed, 16 Jun 2004 16:29:29 GMT, Robert Myers <rmyers1400@comcast.net>
wrote:

>Leave out the technical issues. If Intel/HP have to climb down from the
>fortress they've built around Itanium, how will they ever pull it off?
>It would be like IBM admitting that maybe System 360 wasn't such a great
>idea, after all (which, who knows, maybe it wasn't).

Wasn't it "Stretch" which wasn't a good idea... preceding S/360? I guess
there was a lot of folklore back then too. :-)

Rgds, George Macdonald

"Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
Anonymous
June 16, 2004 10:19:39 PM

Robert Myers <rmyers1400@comcast.net> wrote:
> Leave out the technical issues. If Intel/HP have to climb down from the
> fortress they've built around Itanium, how will they ever pull it off?
> It would be like IBM admitting that maybe System 360 wasn't such a great
> idea, after all (which, who knows, maybe it wasn't).

Whatever one thinks about the technical merits of S/360,
the commercial success was undeniable. The same could be
said of x86.

I very much doubt that IA64 (Itanium) will ever enjoy such
commercial success. It's more likely to go down that path
heavily travelled by Intel after 432 and i860.

-- Robert
Anonymous
June 17, 2004 12:55:26 AM

George Macdonald wrote:

> On Wed, 16 Jun 2004 16:29:29 GMT, Robert Myers <rmyers1400@comcast.net>
> wrote:
>
>
>>Leave out the technical issues. If Intel/HP have to climb down from the
>>fortress they've built around Itanium, how will they ever pull it off?
>>It would be like IBM admitting that maybe System 360 wasn't such a great
>>idea, after all (which, who knows, maybe it wasn't).
>
>
> Wasn't it "Stretch" which wasn't a good idea... preceding S/360. I guess
> there was a lot of folklore back then too.:-)
>

Stretch just precedes my actually becoming conscious of computers in any
but the most theoretical of ways, so I know only the written record--no
folklore. Branch prediction, speculation, out of order execution,
hardware prefetch, and fused FMAC: less than two hundred thousand
transistors. What's not to like? :-).

RM
Anonymous
June 17, 2004 1:35:38 AM

Robert Redelmeier wrote:

>
> Whatever one thinks about the technical merits of S/360,
> the commercial success was undeniable. The same could be
> said of x86.
>
> I very much doubt that IA64 (Itanium) will ever enjoy such
> commercial success. It's more likely to go down that path
> heavily travelled by Intel after 432 and i860.
>

You may be right. I'm just having a really hard time imagining how this
is going to go down. Intel's goal is to 360-ize as much enterprise code
as it can (only for IA64, obviously). How well they are doing that, how
well they can do that, is something that I just cannot judge, although
it is apparent things are not going according to plan at the moment.

George Macdonald said something about his own experience with Alpha (and
by extension, with Itanium) that sounded absolutely pivotal: software
developers don't want to develop for a platform that isn't going to make
them money.

That's easy, you're thinking: pass on Itanium. Not so fast, buckaroo.
One future: x86-64=Open source, low rent, lots of volume, no margin.
Itanium=Proprietary, expensive, low volume, high margin. If you look at
the Intel branding ads aimed at corporate decision-makers, Xeon is
"productivity" and Itanium is "enterprise." Who wants to be
"productivity" when one could be "enterprise?" Especially if the "who"
is a manager, to whom "productivity" is something that is delivered by
nameless underlings. :-).

They'll never get away with it, you're sputtering, and you may be right,
but Intel shows no signs of abandoning its strategy.

RM
Anonymous
June 17, 2004 1:48:10 AM

Robert Redelmeier wrote:

> Robert Myers <rmyers1400@comcast.net> wrote:
>> Leave out the technical issues. If Intel/HP have to climb down
>> from the fortress they've built around Itanium, how will they
>> ever pull it off? It would be like IBM admitting that maybe
>> System 360 wasn't such a great idea, after all (which, who knows,
>> maybe it wasn't).
>
> Whatever one thinks about the technical merits of S/360,
> the commercial success was undeniable. The same could be
> said of x86.
>

Undeniable, indeed. That's *MY* basis for supporting K8.

IBM tried to kill S/360 with 'FS', just as Intel has tried to kill
x86 with Itanic. Both companies did it for internal reasons,
totally disregarding customers' wishes. "Amazingly", neither went
over well with the customer set. Both 3x0 and x86 are still with
us (after forty and twenty years, respectively). ...evolution over
revolution.

> I very much doubt that IA64 (Itanium) will ever enjoy such
> commercial success. It's more likely to go down that path
> heavily travelled by Intel after 432 and i860.

....as AMD64 stampedes the mess Intel's tried to create. History
does repeat. Those who try to harness history will make money.
Those who try to harness their customers deserve an ugly death.

--
Keith
Anonymous
June 17, 2004 2:17:17 AM

Robert Myers <rmyers1400@comcast.net> wrote in part:
> One future: x86-64=Open source, low rent, lots of volume, no margin.
> Itanium=Proprietary, expensive, low volume, high margin.

Both entirely true.

> If you look at the Intel branding ads aimed at corporate
> decision-makers, Xeon is "productivity" and Itanium is "enterprise."
> Who wants to be "productivity" when one could be "enterprise?"

I doubt even the dinosaur brains will swallow that swill.

Two words: "second source". No one wants to be dependent on a
single supplier. PC vs Mac. S/360 succeeded mostly by offering
a uniform platform with a promise of continuation (backward
compatibility) that attracted development.

x86/WinNT (maybe Linux) holds that position now. What compelling
argument is available for IA64? What performance? 64 bits is
available ~painlessly with x86-64.

> They'll never get away with it, you're sputtering, and you may
> be right, but Intel shows no signs of abandoning its strategy.

I don't sputter. I know the market will decide. Mistakes get
punished, and stubborn fools commensurately.

-- Robert
Anonymous
June 17, 2004 2:37:43 AM

On Wed, 16 Jun 2004 18:19:39 GMT, Robert Redelmeier <redelm@ev1.net.invalid>
wrote:

>Robert Myers <rmyers1400@comcast.net> wrote:
>> Leave out the technical issues. If Intel/HP have to climb down from the
>> fortress they've built around Itanium, how will they ever pull it off?
>> It would be like IBM admitting that maybe System 360 wasn't such a great
>> idea, after all (which, who knows, maybe it wasn't).
>
>Whatever one thinks about the technical merits of S/360,
>the commercial success was undeniable.

I think the technical merits were right up there as well.
What other system had a control store that required an air-pump to operate? ;-)

/daytripper
Anonymous
June 17, 2004 7:04:34 AM

"K Williams" <krw@att.biz> wrote in message
news:BvWdnVArbK4Kak3dRVn-sw@adelphia.com...
> IIRC the Cyber6600 came significantly after April '64 too.

According to the first sentence of Chapter 43 of Siewiorek, Bell, and
Newell's "Computer Structures: Principles and Examples", the first
6600 was delivered in October 1964. The 6600 project was begun in the
summer of 1960.
Anonymous
June 17, 2004 8:49:37 PM

George Macdonald wrote:

>
> I guess it's likely folklore but I know that when the 7074 was to be
> replaced in a certain office of a multinational corp in 1967, the S/360 was
> the obvious and natural replacement for the DP side of things; OTOH there
> was serious consideration given to Univac 1108 or CDC 6600 for technical &
> scientific work, which had often been done on a 7094... and often at
> extortionate time-lease terms. IOW it wasn't clear that the S/360 could
> hack it for the latter - turned out that it was dreadfully slow but near
> tolerable... if people worked late:-( and got much better later. Certainly
> the performance of S/360 fell way short of expected performance as "sold" -
> I can bore you with the details if you wish.:-)
>
> The CDC 6000 Series didn't become Cyber Series till ~1972[hazy]; before
> that there was 6200, 6400, 6500 and 6600... and there was the notorious
> 7600 in between. Dates of working hardware are difficult to pin down -
> supposedly secret confidential data often went astray and announced
> availability and deliverables were, umm, fungible. The story is probably a
> bit folklorish but no doubt that IBM was seriously threatened by Univac and
> CDC in the technical computing arena.
>

Threatened? :-). The outline of the folklore you report is the
folklore I started my career with: CDC (later Cray) for hydro codes, IBM
for W-2 forms. Lynn Wheeler's posts to comp.arch have helped me to
understand how it was that IBM sold machines at all, because, as far as
I could tell, they were expensive and slow, JCL was descended from some
language used in Mordor, and the batch system was designed for people
who knew ahead of time what resources a job would need (that is to say,
it was designed for people counting W-2 forms and not for people doing
research). My impression of IBM software was fixed by my early
experience with the Scientific Subroutine Package, and even the
compilers were buggy for the kinds of things I wanted to use--no problem
for financial applications, where there was (as yet) no requirement for
double precision complex arithmetic.

One is tempted to summarize the Stretch/360 experiences as: "How IBM
learned to love banks and to hate the bomb." In retrospect, IBM's
misadventure with Stretch might be regarded as a stroke of luck. An
analyst too close to the action might have regarded IBM's being pushed
out of technical computing in the days of the Space Race as a disaster,
but the heady days of "If it's technical, it must be worth doing" were
actually over, and IBM was in the more lucrative line of business, anyway.

RM
Anonymous
June 17, 2004 8:49:38 PM

On Thu, 17 Jun 2004 16:49:37 GMT, Robert Myers <rmyers1400@comcast.net>
wrote:

>George Macdonald wrote:
>
>>
>> I guess it's likely folklore but I know that when the 7074 was to be
>> replaced in a certain office of a multinational corp in 1967, the S/360 was
>> the obvious and natural replacement for the DP side of things; OTOH there
>> was serious consideration given to Univac 1108 or CDC 6600 for technical &
>> scientific work, which had often been done on a 7094... and often at
>> extortionate time-lease terms. IOW it wasn't clear that the S/360 could
>> hack it for the latter - turned out that it was dreadfully slow but near
>> tolerable... if people worked late:-( and got much better later. Certainly
>> the performance of S/360 fell way short of expected performance as "sold" -
>> I can bore you with the details if you wish.:-)
>>
>> The CDC 6000 Series didn't become Cyber Series till ~1972[hazy]; before
>> that there was 6200, 6400, 6500 and 6600... and there was the notorious
>> 7600 in between. Dates of working hardware are difficult to pin down -
>> supposedly secret confidential data often went astray and announced
>> availability and deliverables were, umm, fungible. The story is probably a
>> bit folklorish but no doubt that IBM was seriously threatened by Univac and
>> CDC in the technical computing arena.
>>
>
>Threatened? :-). The outline of the folklore you report is the
>folklore I started my career with: CDC (later Cray) for hydro codes, IBM
>for W-2 forms. Lynn Wheeler's posts to comp.arch have helped me to
>understand how it was that IBM sold machines at all, because, as far as
>I could tell, they were expensive and slow, JCL was descended from some
>language used in Mordor, and the batch system was designed for people
>who knew ahead of time what resources a job would need (that is to say,
>it was designed for people counting W-2 forms and not for people doing
>research). My impression of IBM software was fixed by my early
>experience with the Scientific Subroutine Package, and even the
>compilers were buggy for the kinds of things I wanted to use--no problem
>for financial applications, where there was (as yet) no requirement for
>double precision complex arithmetic.

I remember, coming from working with S/360, getting my eyes opened when I
first saw a 6600 "installation" - terminals<gawp> (actually they called
them VDUs or something like that), in client cubicles, connected by wires
to the computer on a different floor... where you could actually page
through files. Clients got charged a bundle to use them mind you. I
recall saying to my colleagues at the time: "hey maybe one of these days
we'll all have one of those err, VDU thingys on every desk and we'll
program straight into the file on the computer and look at the results
there too - no more coding forms, punch cards or listings etc." They all
laughed like hell.

As for JCL, I once had a JCL evangelist explain to me how he could use JCL
in ways which weren't possible on systems with simpler control statements -
conditional job steps, substitution of actual file names for dummy
parameters etc... "catalogued procedures"?[hazy again] The guy was stuck
in his niche of "job steps" where data used to be massaged from one set of
tapes to another and then on in another step to be remassaged into some
other record format for storing on another set of tape... all those steps
being necessary, essentially because of the sequential tape storage. We'd
had disks for a while but all they did was emulate what they used to do
with tapes - he just didn't get it.

>One is tempted to summarize the Stretch/360 experiences as: "How IBM
>learned to love banks and to hate the bomb." In retrospect, IBM's
>misadventure with Stretch might be regarded as a stroke of luck. An
>analyst too close to the action might have regarded IBM's being pushed
>out of technical computing in the days of the Space Race as a disaster,
>but the heady days of "If it's technical, it must be worth doing" were
>actually over, and IBM was in the more lucrative line of business, anyway.

So much for analysts - plus ça change....

Rgds, George Macdonald

"Just because they're paranoid doesn't mean you're not psychotic" - Who, me??
Anonymous
June 18, 2004 1:29:23 AM

Felger Carbon wrote:

> "K Williams" <krw@att.biz> wrote in message
> news:BvWdnVArbK4Kak3dRVn-sw@adelphia.com...
>> IIRC the Cyber6600 came significantly after April '64 too.
>
> According to the first sentence of Chapter 43 of Siewiorek, Bell,
> and Newell's "Computer Structures: Principles and Examples", the
> first 6600 was delivered in October 1964. The 6600 project was
> begun in the summer of 1960.

Ok, so they were contemporaries. ...hardly evidence that IBM was somehow
shocked by the 6600 and so came out with the '360. The design point
for the '360 was to have a consistent ISA from top to bottom, even
though the underlying hardware was *quite* different. *That* was
the stroke of genius. Anyone can do amazing things with hardware
if one has a clean sheet of paper. ...and that was the norm at the
time. S/360 acknowledged that there was something more important
than hardware. ...and that is why it's still here.

--
Keith
Anonymous
June 18, 2004 1:45:58 AM

Robert Myers wrote:

> George Macdonald wrote:
>
>>
>> I guess it's likely folklore but I know that when the 7074 was to
>> be replaced in a certain office of a multinational corp in 1967,
>> the S/360 was the obvious and natural replacement for the DP side
>> of things; OTOH there was serious consideration given to Univac
>> 1108 or CDC 6600 for technical & scientific work, which had often
>> been done on a 7094... and often at
>> extortionate time-lease terms. IOW it wasn't clear that the
>> S/360 could hack it for the latter - turned out that it was
>> dreadfully slow but near
>> tolerable... if people worked late:-( and got much better later.
>> Certainly the performance of S/360 fell way short of expected
>> performance as "sold" - I can bore you with the details if you
>> wish.:-)
>>
>> The CDC 6000 Series didn't become Cyber Series till ~1972[hazy];
>> before that there was 6200, 6400, 6500 and 6600... and there was
>> the notorious
>> 7600 in between. Dates of working hardware are difficult to pin
>> down - supposedly secret confidential data often went astray and
>> announced
>> availability and deliverables were, umm, fungible. The story is
>> probably a bit folklorish but no doubt that IBM was seriously
>> threatened by Univac and CDC in the technical computing arena.
>>
>
> Threatened? :-). The outline of the folklore you report is the
> folklore I started my career with: CDC (later Cray) for hydro
> codes, IBM
> for W-2 forms. Lynn Wheeler's posts to comp.arch have helped me
> to understand how it was that IBM sold machines at all, because,
> as far as I could tell, they were expensive and slow, JCL was
> descended from some language used in Mordor, and the batch system
> was designed for people who knew ahead of time what resources a
> job would need (that is to say, it was designed for people
> counting W-2 forms and not for people doing
> research). My impression of IBM software was fixed by my early
> experience with the Scientific Subroutine Package, and even the
> compilers were buggy for the kinds of things I wanted to use--no
> problem for financial applications, where there was (as yet) no
> requirement for double precision complex arithmetic.
>
> One is tempted to summarize the Stretch/360 experiences as: "How
> IBM
> learned to love banks and to hate the bomb." In retrospect, IBM's
> misadventure with Stretch might be regarded as a stroke of luck.
> An analyst too close to the action might have regarded IBM's being
> pushed out of technical computing in the days of the Space Race as
> a disaster, but the heady days of "If it's technical, it must be
> worth doing" were actually over, and IBM was in the more lucrative
> line of business, anyway.

Ok, answer this question: Where is the money?

....even John Dillinger knew the answer! ;-)

--
Keith
Anonymous
June 18, 2004 7:14:48 AM

"K Williams" <krw@att.biz> wrote in message
news:7_adndXvA80f10_d4p2dnA@adelphia.com...
>
> Ok, answer this question: Where is the money?
>
> ...even John Dillinger knew the answer! ;-)

Willie Sutton. Not John Dillinger. Gotta get yer ne'er-do-wells
right. ;-)
Anonymous
June 18, 2004 8:01:21 AM

K Williams wrote:

> Robert Myers wrote:
>

<snip>

>>
>>One is tempted to summarize the Stretch/360 experiences as: "How
>>IBM
>>learned to love banks and to hate the bomb." In retrospect, IBM's
>>misadventure with Stretch might be regarded as a stroke of luck.
>>An analyst too close to the action might have regarded IBM's being
>>pushed out of technical computing in the days of the Space Race as
>>a disaster, but the heady days of "If it's technical, it must be
>>worth doing" were actually over, and IBM was in the more lucrative
>>line of business, anyway.
>
>
> Ok, answer this question: Where is the money?
>
> ...even John Dillinger knew the answer! ;-)
>

It is the style of business and not the plentiful supply of money that
makes banks and insurance companies attractive as clients for IBM.
Under the right circumstances, money can pour from the heavens for
national security applications, and it will pour from the heavens for
biotechnology and entertainment. Whatever you may think of that kind of
business, IBM wants a piece of it.

From a technical standpoint, there is no company I know of better
positioned than IBM to dominate high performance computing, the future
of which is not x86 (and not Itanium, either). Will IBM do it? If the
past is any guide, IBM will be skunked again, but there is always a
first time.

RM
Anonymous
June 19, 2004 1:47:01 AM

Felger Carbon wrote:

> "K Williams" <krw@att.biz> wrote in message
> news:7_adndXvA80f10_d4p2dnA@adelphia.com...
>>
>> Ok, answer this question: Where is the money?
>>
>> ...even John Dillinger knew the answer! ;-)
>
> Willie Sutton. Not John Dillinger. Gotta get yer ne'er-do-wells
> right. ;-)

Well... I don't go back quite as far as you, Felg. ;-)

--
Keith
Anonymous
June 19, 2004 2:03:05 AM

Robert Myers wrote:

> K Williams wrote:
>
>> Robert Myers wrote:
>>
>
> <snip>
>
>>>
>>>One is tempted to summarize the Stretch/360 experiences as: "How
>>>IBM
>>>learned to love banks and to hate the bomb." In retrospect,
>>>IBM's misadventure with Stretch might be regarded as a stroke of
>>>luck. An analyst too close to the action might have regarded
>>>IBM's being pushed out of technical computing in the days of the
>>>Space Race as a disaster, but the heady days of "If it's
>>>technical, it must be worth doing" were actually over, and IBM
>>>was in the more lucrative line of business, anyway.
>>
>>
>> Ok, answer this question: Where is the money?
>>
>> ...even John Dillinger knew the answer! ;-)
>>
>
> It is the style of business and not the plentiful supply of money
> that makes banks and insurance companies attractive as clients for
> IBM.

Certainly. ...and that's *exactly* my point.

> Under the right circumstances, money can pour from the heavens
> for national security applications, and it will pour from the
> heavens for
> biotechnology and entertainment. Whatever you may think of that
> kind of business, IBM wants a piece of it.

Nonsense. IBM does a coupla tens-o-$billions in commercial stuff
each year. There is no defined "government" market that's even
close. Even most government problems can be refined down to
"counting W2's".

> From a technical standpoint, there is no company I know of better
> positioned than IBM to dominate high performance computing, the
> future
> of which is not x86 (and not Itanium, either). Will IBM do it?

IMO, no. ...unless it fits into one of the research niches. The
HPC market is so muddled that IBM would be crazy to risk major
money jumping in. Certainly there is dabbling going on, and if
Uncle is going to fund research there will be someone to soak up
the grant.

> If the past is any guide, IBM will be skunked again, but there is
> always a first time.

You see it differently than the captains of the ship. The money is
where, well, the money is. It's a *lot* more profitable selling
what you know (and have) to customers you know (and need what
you have), than to risk developing what someone thinks is needed,
but what he's not willing to pay for.

As much as you (and indeed I) may wish otherwise, IBM is *not* in
the risk business these days. If it's not a sure thing it will
simply not be funded. Sure a few bucks for another deep-purple or
a letterbox commercial works...

--
Keith
Anonymous
June 19, 2004 8:17:20 PM

K Williams wrote:

>
> As much as you (and indeed I) may wish otherwise, IBM is *not* in
> the risk business these days. If it's not a sure thing it will
> simply not be funded. Sure a few bucks for another deep-purple or
> a letterbox commercial works...
>

I'm not smart enough to understand what's IBM and what's Wall Street,
but I agree with you that bold initiatives are something we should not
be looking for from IBM, and the wizards in Washington are as keen as
everyone else to buy off the shelf these days.

RM
Anonymous
June 20, 2004 5:29:28 PM

Robert Myers wrote:

> K Williams wrote:
>
>>
>> As much as you (and indeed I) may wish otherwise, IBM is *not* in
>> the risk business these days. If it's not a sure thing it will
>> simply not be funded. Sure a few bucks for another deep-purple
>> or a letterbox commercial works...
>>
>
> I'm not smart enough to understand what's IBM and what's Wall
> Street, but I agree with you that bold initiatives are something
> we should not be looking for from IBM, and the wizards in
> Washington are as keen as everyone else to buy off the shelf these
> days.

Exactly. Off-the-shelf is "cheap". ...even if it doesn't work. ;-)

--
Keith
Anonymous
June 20, 2004 11:01:34 PM

K Williams wrote:

> Robert Myers wrote:
>
>
>>K Williams wrote:
>>
>>
>>>As much as you (and indeed I) may wish otherwise, IBM is *not* in
>>>the risk business these days. If it's not a sure thing it will
>>>simply not be funded. Sure a few bucks for another deep-purple
>>>or a letterbox commercial works...
>>>
>>
>>I'm not smart enough to understand what's IBM and what's Wall
>>Street, but I agree with you that bold initiatives are something
>>we should not be looking for from IBM, and the wizards in
>>Washington are as keen as everyone else to buy off the shelf these
>>days.
>
>
> Exactly. Off-the-shelf is "cheap". ...even if it doesn't work. ;-)
>

Is it too optimistic to imagine that we may be coming to some kind of
closure? That you can do so much with off-the-shelf hardware is both an
opportunity and a trap. The opportunity is that you can do more for
less. The trap is that you may not be able to do enough or nearly as
much as you might do if you were a bit more adventurous.

It apparently didn't take too many poundings from clusters of boxes at
supercomputer shows to drive both the customers and the manufacturers of
big iron into full retreat. The benchmark that has been used to create
and celebrate those artificial victories was almost _designed_ to create
such an outcome, and the Washington wizards, understandably tired of
being made fools of, have run up the white flag--with the exception of
the Cray X1, which didn't get built without significant pressure.

I'm hoping that AMD makes commodity eight-way Opteron work and that it
is popular enough to drive significant market competition. Then my
battle cry will be: don't waste limited research resources trying to be
a clever computer builder--what can you do with whatever you want to
purchase or build that you can't do with an eight-way Opteron?

The possibilities for grand leaps just don't come from plugging
commodity boxes together, or even from plugging boards of commodity
processors together. If you can't make a grand leap, it really isn't
worth the bother (that's the statement that makes enemies for me--people
may not know how to do much else, but they sure do know how to run cable).

Just a few years ago, I thought commodity clusters were a great idea.
The more I look at the problem, the more I believe that off the shelf
should be really off the shelf, not do-it-yourself. It's not that the
do-it-yourself clusters can't do more for cheap--they can--they just
don't do enough more to make it really worth the bother.

Processors with *Teraflop* capabilities are a reality, and not just in
artificially inflated numbers for game consoles. Not only do those
teraflop chips wipe the floor with x86 and Itanium for the problems you
really need a breakthrough for, they don't need warehouses full of
routers, switches, and cable to get those levels of performance.

Clusters of very low-power chips, a la Blue Gene was not a dumb idea, it
just isn't bold enough--you still need those warehouses, a separate
power plant to provide power and cooling, and _somebody_ is paying for
the real estate, even if it doesn't show up in the price of the machine.
_Maybe_ some combination of Moore's law, network on a chip, and a
breakthrough in board level interconnect could salvage the future of
conventional microprocessors for "supercomputing," but right now, the
future sure looks like streaming processors to me, and not just because
they remind me of the Cray 1.

Streaming processors a slam dunk? Apparently not. They're hard to
program and inflexible. IBM is the builder of choice for them at the
moment. Somebody else, though, will have to come up with the money.

RM
Anonymous
a b à CPUs
June 21, 2004 1:35:17 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

Robert Myers wrote:

> K Williams wrote:
>
>> Robert Myers wrote:
>>
>>
>>>K Williams wrote:
>>>
>>>
>>>>As much as you (and indeed I) may wish otherwise, IBM is *not*
>>>>in
>>>>the risk business these days. If it's not a sure thing it will
>>>>simply not be funded. Sure a few bucks for another deep-purple
>>>>or a letterbox commercial works...
>>>>
>>>
>>>I'm not smart enough to understand what's IBM and what's Wall
>>>Street, but I agree with you that bold initiatives are something
>>>we should not be looking for from IBM, and the wizards in
>>>Washington are as keen as everyone else to buy off the shelf
>>>these days.
>>
>>
>> Exactly. Off-the-shelf is "cheap". ...even if it doesn't work.
>> ;-)
>>
>
> Is it too optimistic to imagine that we may be coming to some kind
> of closure? That you can do so much with off-the-shelf hardware
> is both an opportunity and a trap. The opportunity is that you
> can do more for less. The trap is that you may not be able to do
> enough or nearly as much as you might do if you were a bit more
> adventurous.

Gee, fantasy meets reality, once again. The reality is that what we
have is "good enough". It's up to you softies to make your stuff
fit within the hard realities of physics. That is, it's *all*
about algorithms. Don't expect us hardware types to bail you out
of your problems anymore. We're knocking on the door of hard
physics, so complain to the guys across the Boneyard from MRL.

> It apparently didn't take too many poundings from clusters of
> boxes at supercomputer shows to drive both the customers and the
> manufacturers of
> big iron into full retreat.

Perhaps because *cheap* clusters could solve the "important"
problems, given enough thought? Of course the others are deemed to
be "unimportant", by definition. ...at least until there is a
solution. ;-)

> The benchmark that has been used to
> create and celebrate those artificial victories was almost
> _designed_ to create such an outcome, and the Washington wizards,
> understandably tired of being made fools of, have run up the white
> flag--with the exception of the Cray X-1, which didn't get built
> without significant pressure.

Ok...
>
> I'm hoping that AMD makes commodity eight-way Opteron work and
> that it
> is popular enough to drive significant market competition. Then
> my battle cry will be: don't waste limited research resources
> trying to be a clever computer builder--what can you do with
> whatever you want to purchase or build that you can't do with an
> eight-way Opteron?

I'm hoping for the same. ...albeit for a different reason.

> The possibilities for grand leaps just don't come from plugging
> commodity boxes together, or even from plugging boards of
> commodity
> processors together. If you can't make a grand leap, it really
> isn't worth the bother (that's the statement that makes enemies
> for me--people may not know how to do much else, but they sure do
> know how to run cable).

IMHO, we're not going to see any grand leaps in hardware. We have
some rather hard limits here. "186,000mi/sec isn't just a good
idea, it's the *LAW*", sort of thing.

No doubt we're currently running into what amounts to a technology
speedbump, but there *are* some hard limits we're starting to see.
It's up to you algorithm types now. ;-)

> Just a few years ago, I thought commodity clusters were a great
> idea. The more I look at the problem, the more I believe that off
> the shelf
> should be really off the shelf, not do-it-yourself. It's not that
> the do it yourself clusters can't do more for cheap--they
> can--they just don't do enough more to make it really worth the
> bother.

Why should the hardware vendor anticipate what *you* want? You pay,
they listen. This is a simple fact of life.

> Processors with *Teraflop* capabilities are a reality, and not
> just in
> artificially inflated numbers for game consoles. Not only do
> those teraflop chips wipe the floor with x86 and Itanium for the
> problems you really need a breakthrough for, they don't need
> warehouses full of routers, switches, and cable to get those
> levels of performance.

So buy them. I guess I don't understand your problem. They're
reality, so...

> Clusters of very low-power chips, a la Blue Gene was not a dumb
> idea, it just isn't bold enough--you still need those warehouses,
> a separate power plant to provide power and cooling, and
> _somebody_ is paying for the real estate, even if it doesn't show
> up in the price of the machine.
> _Maybe_ some combination of Moore's law, network on a chip, and
> a
> breakthrough in board level interconnect could salvage the future
> of conventional microprocessors for "supercomputing," but right
> now, the future sure looks like streaming processors to me, and
> not just because they remind me of the Cray 1.

Yawn! So go *do* it. The fact is that it would be there if there
was a market. No, likely not from IBM, at least until someone else
proved there was $billions to be made. IBM is all about $billions.

> Streaming processors a slam dunk? Apparently not. They're hard
> to
> program and inflexible. IBM is the builder of choice for them at
> the
> moment. Somebody else, though, will have to come up with the
> money.

Builder, perhaps. Architect/proponent/financier? I don't think
so. ...at least not the way this peon sees things. I've had many
wishes over the years; this doesn't even come close to my list of
"good ideas wasted on dumb management".

--
Keith
Anonymous
a b à CPUs
June 21, 2004 4:10:34 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

Rupert Pigott wrote:

> Robert Myers wrote:


>
>> Clusters of very low-power chips, a la Blue Gene was not a dumb idea,
>> it just isn't bold enough--you still need those warehouses, a separate
>> power plant to provide power and cooling, and _somebody_ is paying for
>> the real estate, even if it doesn't show up in the price of the machine.
>
>
> BG significantly raises the bar on density and power consumption. The
> real issue with it is can folks make use of it effectively ? As far as
> the mechanicals go the papers say that BG/L is scalable from a single
> shelf to the full warehouse..
>
> In fact the things which stand out about BG/L for me is how lean it is,
> and how they've designed the thing from the ground up with MTBF and
> servicing in mind. A bunch of whiteboxes hooked up by some 3rd party
> interconnect just can't beat that.
>

I think we're agreed on that.

<snip>

> "Compared with today's fastest supercomputers, it will be six times
> faster, consume 1/15th the power per computation and be 10 times more
> compact than today's fastest supercomputers"
>

Those are compelling numbers, even by the harsh standard I use, which is
to take the fourth root of the claimed miracle as the real payoff
(because that's how much more hydro you can really do).

We need to be aiming at qualitative changes in how we do business,
though. With network on a chip and significant improvements in
board-level packaging, maybe we can get there with conventional
microprocessors in a Blue Gene architecture--but unless there is some
miracle I don't know about in the offing, we're going to need those
improvements and more, especially since, if scaling really hasn't fallen
apart at 90nm, nobody is saying so.

By comparison, we can do teraflop on a chip _now_ with streaming
technology. That's really hard to ignore, and we do need those
teraflops, and more.

RM
Anonymous
a b à CPUs
June 21, 2004 9:35:00 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

Robert Myers wrote:

[SNIP]

> By comparison, we can do teraflop on a chip _now_ with streaming
> technology. That's really hard to ignore, and we do need those
> teraflops, and more.

Yes, but can you do anything *useful* with that streaming pile of
TeraFLOP ? :) 

I still can't see what this Streaming idea is bringing to the table
that's fundamentally new. It still runs into the parallelisation
wall eventually, it's just Yet Another Coding Paradigm. :/ 

Cheers,
Rupert
Anonymous
a b à CPUs
June 21, 2004 5:42:39 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

Rupert Pigott wrote:

> Robert Myers wrote:
>
> [SNIP]
>
>> By comparison, we can do teraflop on a chip _now_ with streaming
>> technology. That's really hard to ignore, and we do need those
>> teraflops, and more.
>
>
> Yes, but can you do anything *useful* with that streaming pile of
> TeraFLOP ? :) 
>

The long range forces part of the molecular dynamics calculation is
potentially a tight little loop where the fact that it takes many cycles
to compute a reciprocal square root wouldn't matter if the calculation
were streamed.

There are many such opportunities to do something useful. There are
circumstances where you can't do streaming parallelism naively because
of well-known pipeline hazards, but, as always, there are ways to cheat
the devil.
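To make the point concrete (this sketch is mine, not from the thread; the 1/r^2 force law and all names are illustrative), the kernel in question is a tight O(N^2) loop whose inner body is little more than a reciprocal square root and a few multiply-adds per pair -- exactly the shape that streams well, since back-to-back pairs hide the many-cycle rsqrt latency in the pipeline:

```python
import math

def long_range_step(pos, q):
    """Accumulate pairwise 1/r^2 central forces.

    pos: list of (x, y, z) particle positions
    q:   list of charges (or masses)
    The inner loop is one reciprocal square root plus a handful of
    multiply-adds per pair; streamed back-to-back, the rsqrt latency
    is hidden rather than paid on every iteration.
    """
    n = len(pos)
    forces = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        xi, yi, zi = pos[i]
        for j in range(n):
            if i == j:
                continue  # no self-interaction
            dx = xi - pos[j][0]
            dy = yi - pos[j][1]
            dz = zi - pos[j][2]
            r2 = dx * dx + dy * dy + dz * dz
            inv_r3 = (1.0 / math.sqrt(r2)) ** 3  # the expensive rsqrt
            c = q[i] * q[j] * inv_r3
            forces[i][0] += c * dx
            forces[i][1] += c * dy
            forces[i][2] += c * dz
    return forces
```

Two unit charges one unit apart repel with unit force along the axis between them, which makes the kernel easy to sanity-check.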

> I still can't see what this Streaming idea is bringing to the table
> that's fundamentally new. It still runs into the parallelisation
> wall eventually, it's just Yet Another Coding Paradigm. :/ 
>

In a conventional microprocessor, the movement of data and progress
toward the final answer are connected only in the most vaguely
conceptual way: out of memory, into the cache, into a register, into an
execution unit, into another register, back into cache,... blah, blah,
blah. All that chaotic movement takes time and, even more important,
energy. In a streaming processor, data physically move toward the exit
and toward a final answer.

Too simple a view? By a country mile to be sure. Some part of almost
all problems will need a conventional microprocessor. For problems that
require long range data movement, getting the streaming paradigm to work
even in the crudest way above the chip level will be... challenging.

Fortunately, there is already significant experience from graphics
programming with what can be accomplished by way of streaming
parallelism, and we don't have to count on anybody with a big checkbook
waking up from their x86 hangover to see these ideas explored more
thoroughly: Playstation 3 and the associated graphics workstation will
make it happen.

Yet Another Coding Paradigm? I can live with that, but I think it's a
more powerful paradigm than you do, plainly.

RM
Anonymous
a b à CPUs
June 21, 2004 10:20:36 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

K Williams wrote:

> Robert Myers wrote:
>

<snip>

>
> Gee, fantasy meets reality, once again. The reality is that what we
> have is "good enough". It's up to you softies to make your stuff
> fit within the hard realities of physics. That is, it's *all*
> about algorithms. Don't expect us hardware types to bail you out
> of your problems anymore. We're knocking on the door of hard
> physics, so complain to the guys across the Boneyard from MRL.
>

You seem to think that the complexity of the problems to be solved is
arbitrary, but it's not. It would be naive to assume that everything
possible has been wrung out of the algorithms, but it would be equally
naive to think that problems we want so badly to be able to solve will
ever be solved without major advances in hardware.

As to the physics...I wish I even had a clue.

>
>>It apparently didn't take too many poundings from clusters of
>>boxes at supercomputer shows to drive both the customers and the
>>manufacturers of
>>big iron into full retreat.
>
>
> Perhaps because *cheap* clusters could solve the "important"
> problems, given enough thought?

That's been the delusion, and that's exactly what it is: a delusion.

> Of course the others are deemed to
> be "unimportant", by definition. ...at least until there is a
> solution. ;-)
>

And that's why us "algorithm" types can't afford to ignore hardware: the
algorithms and even the problems we can solve are dictated by hardware.

<snip>

>
>>The possibilities for grand leaps just don't come from plugging
>>commodity boxes together, or even from plugging boards of
>>commodity
>>processors together. If you can't make a grand leap, it really
>>isn't worth the bother (that's the statement that makes enemies
>>for me--people may not know how to do much else, but they sure do
>>know how to run cable).
>
>
> IMHO, we're not going to see any grand leaps in hardware. We have
> some rather hard limits here. "186,000mi/sec isn't just a good
> idea, it's the *LAW*", sort of thing.
>

For the purpose of doing computational physics, the speed of light is a
limitation on how long it takes to satisfy data dependencies in a single
computational step. For the bogey protein-folding calculation in Allen
et al., we need to do 10^11 steps. One microsecond is 300 meters (3x10^8
m/s x 10^-6 s). If we can jam the computer into a 300 meter sphere,
then a calculation that took one crossing time per time step would take
10^5 seconds, or about 30 hours. The Blue Gene document estimates 3
years for such a calculation, thereby allowing for more like 1000 speed
of light crossings per time step. To make the calculation go faster, we
need to reduce the number of speed of light crossings required or to
reduce the size of the machine.
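Laid out as code, the arithmetic in the paragraph above (a quick check using the post's own rounded constants):

```python
# Back-of-envelope check of the crossing-time argument above.
C = 3.0e8                 # speed of light, m/s
STEPS = 1e11              # time steps for the bogey protein-folding run
DIAMETER = 300.0          # machine packed into a 300 m sphere

crossing = DIAMETER / C               # one light-crossing: 1 microsecond
runtime_s = STEPS * crossing          # 1e5 s at one crossing per step
runtime_h = runtime_s / 3600.0        # ~28 hours ("about 30" in the post)

# Blue Gene's 3-year estimate implies roughly this many crossings per step:
crossings_per_step = (3 * 365 * 24 * 3600) / runtime_s   # ~946, "more like 1000"
```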

> No doubt we're currently running into what amounts to a technology
> speedbump, but there *are* some hard limits we're starting to see.
> It's up to you algorithm types now. ;-)
>

All previous predictions of the end of the road have turned out to be
premature, so I'm hesitant to join the chorus now, no matter how clear
the signs may seem to be.

<snip>

>
>>Processors with *Teraflop* capabilities are a reality, and not
>>just in
>>artificially inflated numbers for game consoles. Not only do
>>those teraflop chips wipe the floor with x86 and Itanium for the
>>problems you really need a breakthrough for, they don't need
>>warehouses full of routers, switches, and cable to get those
>>levels of performance.
>
> So buy them. I guess I don't understand your problem. They're
> reality, so...
>

Before silicon comes a simulation model, and there are, indeed, better
ways to be approaching that problem than to be chatting about it on csiphc.

<snip>

>
>>Streaming processors a slam dunk? Apparently not. They're hard
>>to
>>program and inflexible. IBM is the builder of choice for them at
>>the
>>moment. Somebody else, though, will have to come up with the
>>money.
>
>
> Builder, perhaps. Architect/proponent/financier? I don't think
> so. ...at least not the way this peon sees things. I've had many
> wishes over the years; this doesn't even come close to my list of
> "good ideas wasted on dumb management".
>

IBM, and those concerned with what might happen to the technical
capabilities it possesses, have more pressing concerns than whether
IBM should be going into supercomputers or not, and I don't think it
should, so we seem to be agreed about that.

RM
Anonymous
a b à CPUs
June 21, 2004 11:36:50 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

In article <rfp3d0livl7lj5st0v2cj8bdho9u3aoejm@4ax.com>,
George Macdonald <fammacd=!SPAM^nothanks@tellurian.com> writes:
<snip>
> As for JCL, I once had a JCL evangelist explain to me how he could use JCL
> in ways which weren't possible on systems with simpler control statements -
> conditional job steps, substitution of actual file names for dummy
> parameters etc... "catalogued procedures"?[hazy again] The guy was stuck
> in his niche of "job steps" where data used to be massaged from one set of
> tapes to another and then on in another step to be remassaged into some
> other record format for storing on another set of tape... all those steps
> being necessary, essentially because of the sequential tape storage. We'd
> had disks for a while but all they did was emulate what they used to do
> with tapes - he just didn't get it.
>
I used to do JCL, back when I ran jobs on MVS. After getting used to it,
and the fact that you allocated or deleted files using the infamous
IEFBR14, there were things to recommend it. At the very least, you edited
your JCL, and it all stayed put. Then you submitted, and it was in the
hands of the gods. None (or very little, because there were ways to kill
a running job) of this Oops! and hit Ctrl-C.

I never had to deal with tapes, fortunately. It was also frustrating not
having dynamic filenames. There were ways to weasel around some of those
restrictions, though.
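For readers who never met it: IEFBR14 is a do-nothing program, and the allocation or deletion happens purely as a side effect of the DD statements' DISP parameters. A sketch from memory (dataset names made up, and JCL details may be slightly off), roughly:

```jcl
//CLEANUP  EXEC PGM=IEFBR14
//GONE     DD  DSN=MYUSER.WORK.DATA,DISP=(OLD,DELETE,DELETE)
//ALLOC    EXEC PGM=IEFBR14
//FRESH    DD  DSN=MYUSER.NEW.DATA,DISP=(NEW,CATLG,DELETE),
//             UNIT=SYSDA,SPACE=(TRK,(5,1))
```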

Dale Pontius
Anonymous
a b à CPUs
June 21, 2004 11:40:32 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

In article <9ji1d0htgdbqorgc3rkqbanrvm952l62sl@4ax.com>,
daytripper <day_trippr@REMOVEyahoo.com> writes:
> On Wed, 16 Jun 2004 18:19:39 GMT, Robert Redelmeier <redelm@ev1.net.invalid>
> wrote:
>
>>Robert Myers <rmyers1400@comcast.net> wrote:
>>> Leave out the technical issues. If Intel/HP have to climb down from the
>>> fortress they've built around Itanium, how will they ever pull it off?
>>> It would be like IBM admitting that maybe System 360 wasn't such a great
>>> idea, after all (which, who knows, maybe it wasn't).
>>
>>Whatever one thinks about the technical merits of S/360,
>>the commercial success was undeniable.
>
> I think the technical merits were right up there as well.
> What other system had a control store that required an air-pump to operate?;-)
>
When a former boss had a service anniversary, they brought him a 'gift'.
It was one of those thingies that needed an air pump to operate, also
known as CCROS. I suspect it meant Capacitive-Coupled Read-Only Storage.
The slick thing was that it was a ROM you could program with a keypunch.
Not very dense, though. 36KB in 2 or 3 cubic feet.

Dale Pontius
Anonymous
a b à CPUs
June 23, 2004 2:13:07 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

Dale Pontius wrote:

<snip>

>
> The question for IA64 becomes can it bring enough to the table on future
> revisions to make up for its obstacles. Will >8-way become compelling,
> and at what price? At this point, AMD is trying to push its Opteron ASPs
> up, but probably has more flex room than IA64 or Xeon.
>

At this point, Itanium is _still_ mostly expectation. My point in
commenting on the book that started the thread is that Intel seemed to
have no interest in lowering expectations about Itanium.

Intel will do _something_ to diminish the handicap that Itanium
currently has due to in-order execution. The least painful thing that
Intel can do, as far as I understand things, is to use speculative
slices as a prefetch mechanism. That gets a big piece of the advantages
of OoO without changing the main thread control logic at all. Whether
that strategy works at an acceptable cost in transistors and power is
another question.

That single change could rewrite the rules for Itanium, because it
would take much of the heat off compilation and let people see far
more often the kind of performance that Itanium now seems to produce
mostly in benchmarks.

As to cost, Intel have made it clear that they are prepared to do
whatever they have to do to make the chip competitive.

As to how the big (more than 8-way) boxes behave, that's up to the
people who build the big boxes, isn't it? The future big boxes will
depend on board level interconnect and switching infrastructure, and if
anybody knows what that is going to look like in Intel's PCI Express
universe, I wish they'd tell me.

It gets harder to stick with the position all the time, but you still
have to take a deep breath when betting against Intel. The message
Intel wants you to hear is: IA-64 for mission critical big stuff, IA-32
for not-so-critical, not-so-big stuff.

No marketing baloney for you and you don't care what Intel wants you to
hear? That's reasonable and to be expected from technical people.
Itanium is where they intend to put their resources and support for high
end applications, and they apparently have no intention of backing away
from that. Feel free to ignore what they're spending so much money to
tell you. It's your nickel.

RM
Anonymous
a b à CPUs
June 23, 2004 2:18:31 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

On Mon, 21 Jun 2004 19:50:55 -0400, dale@edgehp.invalid (Dale Pontius) wrote:
>One simple question about IA64...
>
>What and whose problem does it solve?
>
>As far as I can tell, its prime mission is to solve Intel's problem, and
>rid it of those pesky cloners from at least some segments of the CPU
>marketplace, hopefully an expanding portion.
>
>It has little to do with customers' problems, in fact it makes some
>problems for customers. (Replace ALL software? Why is this good for
>ME?)

I love the smell of irony in the evening...

The need for humongous non-segmented memory space is a driver for "wider
addressing than ia32 provided" architectures.

The real irony is, after years of pain for everyone involved, the ia64 may
just find itself in the dustbin of perpetual non-starters because the pesky
CLONER came up with a "painless" way to extend memory addressing!

/daytripper (simply delicious stuff ;-)
Anonymous
a b à CPUs
June 24, 2004 12:46:26 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

In article <T92Cc.71530$2i5.7652@attbi_s52>,
Robert Myers <rmyers1400@comcast.net> writes:
> Dale Pontius wrote:
>
> <snip>
>
>>
>> The question for IA64 becomes can it bring enough to the table on future
>> revisions to make up for its obstacles. Will >8-way become compelling,
>> and at what price? At this point, AMD is trying to push its Opteron ASPs
>> up, but probably has more flex room than IA64 or Xeon.
>>
>
> At this point, Itanium is _still_ mostly expectation. My point in
> commenting on the book that started the thread is that Intel seemed to
> have no interest in lowering expectations about Itanium.
>
> Intel will do _something_ to diminish the handicap that Itanium
> currently has due to in-order execution. The least painful thing that
> Intel can do, as far as I understand things, is to use speculative
> slices as a prefetch mechanism. That gets a big piece of the advantages
> of OoO without changing the main thread control logic at all. Whether
> that strategy works at an acceptable cost in transistors and power is
> another question.
>
> That single change could rewrite the rules for Itanium, because it will
> take much of the heat off compilation and allow people more frequently
> actually to see the kind of performance that Itanium now seems to
> produce mostly only in benchmarks.
>
Development cost is a different thing to Intel than to most of the rest
of us. I've heard of "Intellian Hordes" (my perversion of Mongolian),
and it sounds tough to me to coordinate the sheer number of people
they have working on a project. I contrast that with the small team we
have on projects, and our perpetual fervent wish for just a few more
people.

> As to cost, Intel have made it clear that they are prepared to do
> whatever they have to do to make the chip competitive.
>
> As to how the big (more than 8-way) boxes behave, that's up to the
> people who build the big boxes, isn't it? The future big boxes will
> depend on board level interconnect and switching infrastructure, and if
> anybody knows what that is going to look like in Intel's PCI Express
> universe, I wish they'd tell me.
>
Actually, it's none of my business, except as an interested observer. I
don't ever foresee that kind of hardware in my home, and I don't oversee
purchases of that kind of equipment.

> It gets harder to stick with the position all the time, but you still
> have to take a deep breath when betting against Intel. The message
> Intel wants you to hear is: IA-64 for mission critical big stuff, IA-32
> for not-so-critical, not-so-big stuff.
>
My one stake in the IA-64 vs X86-64/IA-32e debate is that I have some
wish to run EDA software on my home machine. I like to have dinner with
the family, and it's about a half-hour each way to/from work. Having
EDA on Linux at home means I can do O.T. after dinner without a drive.

I currently have IA-32 and run EDA software, but that stuff is moving to
64-bit. I can foresee having X86-64 in my own home in the near future,
which keeps me capable. I can't see the horizon where I'll have IA-64
in my home, at the moment. In addition to EDA software, my IA-32
machine also does Internet stuff, plays Quake3, and other clearly non-
work related things. Actually, the work is the extra mission.

> No marketing baloney for you and you don't care what Intel wants you to
> hear? That's reasonable and to be expected from technical people.
> Itanium is where they intend to put their resources and support for high
> end applications, and they apparently have no intention of backing away
> from that. Feel free to ignore what they're spending so much money to
> tell you. It's your nickel.
>
Marketing baloney or not, it's really irrelevant at the moment. I'm a
home user, and Intel's roadmap doesn't put IA-64 in front of me for the
visible horizon. Nor do I have anything to say about purchasing that
calibre of machines at work. I *have* expressed my preference about
seeing EDA software on X86-64 - for the purpose of running it on a home
machine. So not only is it my nickel, they're not even asking me for
it. Any ruminations about IA-64 vs X86-64 are merely that - technical
discussion and ruminations. Anything they're spending money telling me
now is simply cheerleading.

For that matter, since IA-64 isn't on the Intel roadmap for home users
yet, I could well buy an X86-64 machine in the next year or two. When
it's time to step up again, I can STILL examine the IA-64 decision vs
whatever else is on the market, then.

Put simply, at the moment my choices are IA-32, X86-64, and Mac.
Period. Any discussion of IA-64 is just that - discussion, *because* I'm
a technical person.

Dale Pontius
Anonymous
a b à CPUs
June 24, 2004 8:05:54 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

Dale Pontius wrote:

<snip>

>
> Development cost is a different thing to Intel than to most of the rest
> of us. I've heard of "Intellian Hordes" (my perversion of Mongolian),
> and it sounds tough to me to coordinate the sheer number of people
> they have working on a project. I contrast that with the small team we
> have on projects, and our perpetual fervent wish for just a few more
> people.
>

No matter how it turns out, Itanium should be safely in the books for
case studies at schools of management. To my eye, the opportunities and
challenges resemble the opportunities and challenges of big aerospace.
NASA isn't the very best example, but it's the easiest to talk about.
If you have unlimited resources and you're damned and determined to put
a man on the moon, you can do it, no matter how many people you have to
manage to get there. In the aftermath of Apollo, though, with shrinking
budgets and a chronic need to oversell, NASA delivered a Shuttle program
that many see as poorly conceived and executed. Intel and Itanium are
still in the Apollo era in terms of resources.

>
> My one stake in the IA-64 vs X86-64/IA-32e debate is that I have some
> wish to run EDA software on my home machine. I like to have dinner with
> the family, and it's about a half-hour each way to/from work. Having
> EDA on Linux at home means I can do O.T. after dinner without a drive.
>
> I currently have IA-32 and run EDA software, but that stuff is moving to
> 64-bit. I can foresee having X86-64 in my own home in the near future,
> which keeps me capable. I can't see the horizon where I'll have IA-64
> in my home, at the moment. In addition to EDA software, my IA-32
> machine also does Internet stuff, plays Quake3, and other clearly non-
> work related things. Actually, the work is the extra mission.
>

<snip>

> For that matter, since IA-64 isn't on the Intel roadmap for home users
> yet, I could well buy an X86-64 machine in the next year or two. When
> it's time to step up again, I can STILL examine the IA-64 decision vs
> whatever else is on the market, then.
>
> Put simply, at the moment my choices are IA-32, X86-64, and Mac.
> Period. Any discussion of IA-64 is just that -discussion, *because* I'm
> a technical person.
>

The one thing you might care about would be the possibility that the
standard environment for EDA went from x86/Linux to ia64/Whatever. That
could still happen, but it seems like a distant prospect right now.
Itanium seems most likely to prevail over x86-64 in proprietary
software with high license fees, but that kind of software isn't
generally running next to Quake3 now and probably won't ever be.

RM
Anonymous
a b à CPUs
June 25, 2004 1:49:39 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

Dale Pontius wrote:

> In article <rfp3d0livl7lj5st0v2cj8bdho9u3aoejm@4ax.com>,
> George Macdonald <fammacd=!SPAM^nothanks@tellurian.com> writes:
> <snip>
>> As for JCL, I once had a JCL evangelist explain to me how he
>> could use JCL in ways which weren't possible on systems with
>> simpler control statements - conditional job steps, substitution
>> of actual file names for dummy parameters, etc...
>> "catalogued procedures"? [hazy again] The guy was stuck in his
>> niche of "job steps" where data used to be massaged from one set
>> of tapes to another and then on in another step to be remassaged
>> into some other record format for storing on another set of
>> tapes... all those steps being necessary, essentially because of
>> the sequential tape storage. We'd had disks for a while but all
>> they did was emulate what they used to do with tapes - he just
>> didn't get it.
>>
> I used to do JCL, back when I ran jobs on MVS. After getting used
> to it, and the fact that you allocated or deleted files using the
> infamous IEFBR14, there were things to recommend it.

I didn't have much problem with JCL either, and found it rather
powerful (and one only needed IEFBR14 for cleanup detail).
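For anyone who never met it: IEFBR14 is a do-nothing program
(historically little more than a branch to register 14, the return
address), so a step that EXECs it exists purely for the side effects
of its DD statements. A sketch from hazy memory - the job and dataset
names here are made up:

```jcl
//CLEANUP  JOB (ACCT),'HOUSEKEEPING',CLASS=A
//* IEFBR14 itself does nothing; the allocation and deletion
//* happen as side effects of the DISP= dispositions below.
//STEP1    EXEC PGM=IEFBR14
//NEWDS    DD  DSN=MY.NEW.DATASET,DISP=(NEW,CATLG),
//             UNIT=SYSDA,SPACE=(TRK,(1,1))
//OLDDS    DD  DSN=MY.OLD.DATASET,DISP=(OLD,DELETE)
```

The conditional job steps the evangelist bragged about were the COND=
parameter: if I remember the syntax right, COND=(0,NE,STEP1) on a
later EXEC statement bypasses that step unless STEP1 ended with return
code 0.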

> At the very
> least, you edited your JCL, and it all stayed put. Then you
> submitted, and it was in the hands of the gods. None (or very
> little, because there were ways to kill a running job) of this
> Oops! and hit Ctrl-C.

If it was your job, it was rather easy to kill. Of course I
remember when even MVS was about as secure as MSDOS. I learned
much of my MVS stuff (including what initiators were "hot") by
walking through others' JCL and code. Even the "protection"
wasn't. Simply copy the file to another pack and delete it from
the VTOC where it was originally and re-catalog it. Of course RACF
ruined all my fun. ;-) Then there were ways of "hiding" who one
was (starting TSO in the background and submitting a job from there
hid one's identity). ...much more fun than the incomprehensible *ix
stuff. ;-)

> I never had to deal with tapes, fortunately. It was also
> frustrating not having dynamic filenames. There were ways to
> weasel around some of those restrictions, though.

Dynamic file names weren't a problem, AFAIR.

--
Keith
Anonymous
a b à CPUs
June 25, 2004 1:54:06 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

daytripper wrote:

> On Mon, 21 Jun 2004 19:50:55 -0400, dale@edgehp.invalid (Dale
> Pontius) wrote:
>>One simple question about IA64...
>>
>>What and whose problem does it solve?
>>
>>As far as I can tell, its prime mission is to solve Intel's
>>problem, and rid it of those pesky cloners from at least some
>>segments of the CPU marketplace, hopefully an expanding portion.
>>
>>It has little to do with customers' problems, in fact it makes
>>some
>>problems for customers. (Replace ALL software? Why is this good
>>for ME?)
>
> I love the smell of irony in the evening...

I rather like my wife doing that in the morning, so I have crisp
shirts to wear (and if you believe that...).
>
> The need for humongous non-segmented memory space is a driver for
> "wider addressing than ia32 provided" architectures.

But, but, bbbb, everyone *knows* there is no reason for 64b
processors on the desktop! Intel says so.

> The real irony is, after years of pain for everyone involved, the
> ia64 may just find itself in the dustbin of perpetual non-starters
> because the pesky CLONER came up with a "painless" way to extend
> memory addressing!

Are you implying that Intel dropped a big ball? ...or a little one,
BIG-TIME!

> /daytripper (simply delicious stuff ;-)

Indeed. ...though remember; no one needs 64bits. no one needs
64bits. no one needs 64bits. no one, no one, no...

--
Keith
Anonymous
a b à CPUs
June 25, 2004 2:01:55 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

Robert Myers wrote:

> Dale Pontius wrote:
>
> <snip>
>
>>
>> Development cost is a different thing to Intel than to most of
>> the rest
>> of us. I've heard of "Intellian Hordes," (my perversion of
>> Mongolian) and that it sounds tough to me to coordinate the sheer
>> number of people
>> they have working on a project. I contrast that with the small
>> team we have on projects, and our perpetual fervent wish for just
>> a few more people.
>>
>
> No matter how it turns out, Itanium should be safely in the books
> for
> case studies at schools of management.

Rather like the Tacoma Narrows Bridge movie is required viewing for
all freshmen engineers? ;-)

> To my eye, the
> opportunities and challenges resemble the opportunities and
> challenges of big aerospace. NASA isn't the very best example, but
> it's the easiest to talk about. If you have unlimited resources
> and you're damned and determined to put a man on the moon, you can
> do it, no matter how many people you have to
> manage to get there.

...but Intel hasn't gotten there yet, if they ever will.

> In the aftermath of Apollo, though, with
> shrinking budgets and a chronic need to oversell, NASA delivered a
> Shuttle program
> that many see as poorly conceived and executed. Intel and Itanium
> are still in the Apollo era in terms of resources.

No. IMHO, Intel missed the moon and the Shuttle, and went directly
to the politics of the International Space Station. ...A mission
without a requirement.

> <snip>
<ditto>

>> For that matter, since IA-64 isn't on the Intel roadmap for home
>> users
>> yet, I could well buy an X86-64 machine in the next year or two.
>> When it's time to step up again, I can STILL examine the IA-64
>> decision vs whatever else is on the market, then.
>>
>> Put simply, at the moment my choices are IA-32, X86-64, and Mac.
>> Period. Any discussion of IA-64 is just that -discussion,
>> *because* I'm a technical person.
>>
>
> The one thing you might care about would be the possibility that
> the
> standard environment for EDA went from x86/Linux to ia64/Whatever.
> That could still happen, but it seems like a distant prospect
> right now. Itanium seems most plausible to prevail over x86-64 in
> proprietary software with high license fees, but that kind of
> software isn't generally running next to Quake3 now and probably
> won't ever be.

I know several EDA folks have been reluctant to support Linux and
instead support Windows, for at least the low-end stuff (easier to
restrict licensing). I don't see anyone seriously going for IPF
though. It is *expensive* supporting new platforms. ...which is
why x86-64 is so attractive.

--
Keith
Anonymous
a b à CPUs
June 25, 2004 7:27:50 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

K Williams wrote:

> Robert Myers wrote:
>
>
>>K Williams wrote:
>>
>>
>>>Robert Myers wrote:
>>>

>>
>>I'm not sure what kind of complexity you are imagining. Garden
>>variety microprocessors are already implausibly complicated as far
>>as I'm concerned.
>
>
> I guess I'm trying to figure out exactly *what* you're driving at.
> Performance comes with arrays of processors or complex processors.
> Depending on the application, either may win, but there aren't any
> simple uniprocessors at the high end. We're long past that
> possibility.
>
>
>>I have some fairly aggressive ideas about what *might* be done
>>with computers, but they don't necessarily lead to greatly
>>complicated
>>machines. Complicated switching fabric--probably.
>
>
> Ok, now we're back to arrays. ...something which I thought you were
> whining about "last" week.
>

If by an array you mean a stream of data and instructions, I suppose
that's general enough.

As to what I want...I think Iain McClatchie did well enough in
presenting what I thought might have been done with Blue Gene in talking
about his "WIZZIER processor" on comp.arch. You can do it for certain
classes of problems...no one doubts that. You can do it with ASICs if
you've got the money...no one doubts that. Can you build a
general-purpose "supercomputer" that way? Not easily.

We are, in any case, a long way from exhausting the architectural
possibilities.

>
>>>>As to the physics...I wish I even had a clue.
>>>
>>>
>>>Gee, I thought you were plugged into that "physics" stuff too.
>>>Perhaps you just like busting concrete? ;-)
>>>
>>
>>No. I started out, in fact, in the building across the boneyard
>>from
>>MRL. I understand the physical limitations well enough. What I
>>don't know about is what might be done to get around those
>>limitations.
>
>
> ...and neither does anyone else. Many people are hard at work
> re-inventing physics. The last time I remember a significant
> speed bump, IBM invested ten figures in a synchrotron for x-ray
> lithography.

I thought the grand illusion was e-beam lithography.

> Smarter people came up with the diffraction masks.
> Sure, some of these smarter people will come around again, but the
> problems go up exponentially as the feature size shrinks.
>

I'm looking for improvements from: low power operation (the basic
strategy of Blue Gene), improvements in packaging (Sun's slice of the
DARPA pie being one idea, albeit one I'm not crazy about), using
pipelines creatively and aggressively, and more efficient handling of
the movement of instructions and data. If we get better or even
acceptable power-frequency scaling with further scale shrinks,
naturally, I'll take it, but I'm not counting on it.

RM
Anonymous
a b à CPUs
June 25, 2004 8:11:51 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

K Williams wrote:
> Robert Myers wrote:
>
>
>>Dale Pontius wrote:
>>
>><snip>
>>
>>>Development cost is a different thing to Intel than to most of
>>>the rest
>>>of us. I've heard of "Intellian Hordes," (my perversion of
>>>Mongolian) and that it sounds tough to me to coordinate the sheer
>>>number of people
>>>they have working on a project. I contrast that with the small
>>>team we have on projects, and our perpetual fervent wish for just
>>>a few more people.
>>>
>>
>>No matter how it turns out, Itanium should be safely in the books
>>for
>>case studies at schools of management.
>
>
> Rather like the Tacoma Narrows Bridge movie is required viewing for
> all freshmen engineers? ;-)
>
>
>>To my eye, the
>>opportunities and challenges resemble the opportunities and
>>challenges of big aerospace. NASA isn't the very best example, but
>>it's the easiest to talk about. If you have unlimited resources
>>and you're damned and determined to put a man on the moon, you can
>>do it, no matter how many people you have to
>>manage to get there.
>
>
> ...but Intel hasn't gotten there yet, if they ever will.
>
>
>>In the aftermath of Apollo, though, with
>>shrinking budgets and a chronic need to oversell, NASA delivered a
>>Shuttle program
>>that many see as poorly conceived and executed. Intel and Itanium
>>are still in the Apollo era in terms of resources.
>
>
> No. IMHO, Intel missed the moon and the Shuttle, and went directly
> to the politics of the International Space Station. ...A mission
> without a requirement.
>

The comparison to the International Space Station doesn't seem
especially apt. I made the comparison to Apollo only to make the point
that neither ambitious objectives nor the need to bring enormous
resources to bear dooms an enterprise to failure. Who knows how the
Shuttle, which was not a well-conceived undertaking to begin with, might
have fared without the ruinous political and budgetary pressure to which
the program was subjected? By comparison, Intel seems not to have
followed the path of publicly funded technology, which is to starve
troubled programs, thereby guaranteeing even more trouble.

One is tempted to make the comparison to hot fusion, a program that,
after decades of lavish funding, has entered an old-age pension phase.
Both hot fusion and Itanium had identifiable problems involving basic
science, and in neither case have those problems yet been solved. With
Itanium, the misconception (that static scheduling can do the job) may
be so severe that the problem can't be fixed in a satisfactory way. As
to hot fusion, who knows...the physics is infinitely more complicated
than the bare Navier-Stokes equations, which themselves are the subject
of one of the Clay Institute's Millennium Problems.

Both Itanium and hot fusion have been overtaken by events. Hot fusion
has become less compelling as other less Faustian schemes for energy
production have become ever more attractive. In the case of Itanium,
who would ever have imagined that x86 would become so good? In
retrospect, an easy call, but if it were so easy in prospect, lots of
things might have happened differently. Should one fault Intel for not
foreseeing the attack of the out-of-order x86? Quite possibly, but I
wouldn't claim to understand the history well enough to make that judgment.

<snip>

>
> I know several EDA folks have been reluctant to support Linux and
> instead support Windows, for at least the low-end stuff (easier to
> restrict licensing).

Right now, Linux is hostile territory for compiled binaries because of
shared libraries. Windows has an equivalent issue with "DLL hell," but
Microsoft never pretended it wasn't a problem (What's the problem? Just
recompile from source.) and has been working at solving it, not
completely without success. I'm sure the Free Software Foundation would
be just as happy if the problem were never addressed, and the biggest
problems I've encountered have been with GLIBC, but with Linux spending
so much of its time playing a real OS on TV, it seems inevitable that
it will be addressed. For the moment, though, companies like IBM can't
be completely unhappy that professional support or hacker status is
almost a necessity for using proprietary applications with Linux.

> I don't see anyone seriously going for IPF
> though. It is *expensive* supporting new platforms. ...which is
> why x86-64 is so attractive.
>

Intel's real mistake with Itanium, I think. It's a problem even for
PowerPC.

RM
Anonymous
a b à CPUs
July 2, 2004 12:25:48 AM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

In article <hJKdnXalDN7JFEbdRVn-jA@adelphia.com>,
K Williams <krw@att.biz> writes:
> Robert Myers wrote:
>
<snip>
>> In the aftermath of Apollo, though, with
>> shrinking budgets and a chronic need to oversell, NASA delivered a
>> Shuttle program
>> that many see as poorly conceived and executed. Intel and Itanium
>> are still in the Apollo era in terms of resources.
>
> No. IMHO, Intel missed the moon and the Shuttle, and went directly
> to the politics of the International Space Station. ...A mission
> without a requirement.
>
Every now and then, I have to pop up and defend the ISS.

I must agree that at the moment, the ISS has practically NO value to
science. But I must disagree that it has NO value, at all.

At one point it had, and perhaps may have again, value in diplomacy
and fostering international cooperation.

But IMHO the real value of the ISS is not as a SCIENCE experiment, but
as an ENGINEERING experiment. The fact that we're having such a tough
time with it indicates that it is a HARD problem. It's clearly a third
generation space station. The first generation was preassembled, like
Skylab and Salyut, perhaps with a little unfurling and maybe a gizmo or
two docked, but primarily ground-assembled, and sent up. The second
generation was Mir, with a bunch of ground-assembled pieces sent up and
docked. There's some on-orbit assembly, but it's still largely a thing
of the ground.

The ISS has modules all built on the ground, obviously. But the on-
orbit assembly is well beyond that of Mir. It's the next step of a
logical progression.

Some look and say it's hard, let's stop. I say that until we solve the
'minor' problems of the ISS, we're NEVER going to get to anything like
Von Braun's (or 2001: ASO) wheels. Zubrin's proposal, in order to avoid
requiring an expensive space station, went to the extreme of having
nothing to do with one, even if it already were to exist. But until we
get to some sort of on-orbit, or at least off-Earth assembly capability
we're going to be limited to something in the 30ft-or-less diameter
that practically everything we've ever sent up has had.

Oh, the ISS orbit is another terrible obstacle. But at the moment, it
clearly permits Russian launches, and the program would be in even
worse trouble without them.

But IMHO, the ENGINEERING we're learning, however reluctantly and
slowly, is ESSENTIAL to future steps in space.

Dale Pontius
Anonymous
a b à CPUs
July 4, 2004 4:03:47 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

Dale Pontius wrote:

> In article <hJKdnXalDN7JFEbdRVn-jA@adelphia.com>,
> K Williams <krw@att.biz> writes:
>> Robert Myers wrote:
>>
> <snip>
>>> In the aftermath of Apollo, though, with
>>> shrinking budgets and a chronic need to oversell, NASA delivered
>>> a Shuttle program
>>> that many see as poorly conceived and executed. Intel and
>>> Itanium are still in the Apollo era in terms of resources.
>>
>> No. IMHO, Intel missed the moon and the Shuttle, and went directly
>> to the politics of the International Space Station. ...A mission
>> without a requirement.
>>
> Every now and then, I have to pop up and defend the ISS.

Ok, I'll play devil. ;-)

> I must agree that at the moment, the ISS has practically NO value
> to science. But I must disagree that it has NO value, at all.
>
> At one point it had, and perhaps may have again, value in
> diplomacy and fostering international cooperation.

Where's the beef? I *did* say "to the *politics* (emphasis added)
of the International Space Station". ;-)

> But IMHO the real value of the ISS is not as a SCIENCE experiment,
> but as an ENGINEERING experiment. The fact that we're having such
> a tough time with it indicates that it is a HARD problem. It's
> clearly a third generation space station. The first generation was
> preassembled, like Skylab and Salyut, perhaps with a little
> unfurling and maybe a gizmo or two docked, but primarily
> ground-assembled, and sent up. The second generation was Mir, with
> a bunch of ground-assembled pieces sent up and docked. There's
> some on-orbit assembly, but it's still largely a thing of the
> ground.

It's absolutely an engineering experiment. We already knew the
"science". Though there are problems, it went together more easily
than most erector-set projects (surprising all). The problems,
IMO, have been mostly political (and as a subset, financial).

> The ISS has modules all built on the ground, obviously. But the
> on- orbit assembly is well beyond that of Mir. It's the next step
> of a logical progression.

Progression to what? I see no grand plan that requires ISS.
Freedom was cut down to "Fred" because of the massive costs, then
morphed into ISS when it turned into a political tool.

> Some look and say it's hard, let's stop. I say that until we solve
> the 'minor' problems of the ISS, we're NEVER going to get to
> anything like Von Braun's (or 2001: ASO) wheels. Zubrin's
> proposal, in order to avoid requiring an expensive space station,
> went to the extreme of having nothing to do with one, even if it
> already were to exist. But until we get to some sort of on-orbit,
> or at least off-Earth assembly capability we're going to be
> limited to something in the 30ft-or-less diameter that practically
> everything we've ever sent up has had.

I simply don't see ISS as interesting science or engineering. It's
a cut-down compromise done on the cheap with a very foggy mission
statement. It seems politics rules any possible science. There
was a good article (titled "1000 days", or some such) on this in
the last issue of _Air_and_Space_.

> Oh, the ISS orbit is another terrible obstacle. But at the moment,
> it clearly permits Russian launches, and would be in even worse
> trouble, without.

Sure. A 57 degree inclination is useful for other reasons, as well.
The 25 degree orbit out of the Cape would save little, other than
fuel. A polar or even sun-synchronous orbit would be "interesting"
too, but for "other" reasons, which wouldn't be in the spirit of
the ISS. ;-)

> But IMHO, the ENGINEERING we're learning, however reluctantly and
> slowly, is ESSENTIAL to future steps in space.

I disagree, in that ISS isn't doing what was promised. It is not
providing anything essential to the progress, since we don't even
know what we're progressing to.

--
Keith

> Dale Pontius