Intel strikes back with a parallel x86 design

Anonymous
September 23, 2005 10:11:09 PM

Archived from groups: comp.arch,comp.sys.intel,comp.sys.ibm.pc.hardware.chips

Signs and portents, as JMS would say.

Steve Jobs does a 180° and enthusiastically becomes
Intel's bedfellow on the basis of a compelling roadmap.
That roadmap has to be pretty darned interesting.

Intel claims they aren't developing Hyperthreading anymore.
But Intel now knows all the issues involved in hw threading.
Why not exploit that know-how as an advantage over AMD?
AMD has only a fraction of the resources that Intel has,
so AMD will have a hard time catching up.

My speculation is that Intel will build on their HyperThread experience
to design a "parallel x86". x86 CPUs have become superscalar machines.
The next evolutionary step is a parallel machine. Dual-cores are only
an inefficient stop-gap design that wastes transistors on duplicated
or unnecessary resources (e.g. coherency logic between the cores' caches).

My ideas for a parallel x86:

- thread quantums

The idea is to move coarse-granularity timer-driven time-slicing
into the hw so that time-slices can be instruction-granular.

- thread prioritization

The OS assigns static priorities to threads.
The hw computes dynamic priorities according to static priority
and instruction issue for a thread per quantum.
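To make the scheme concrete, here is a toy software model of the dynamic-priority computation. The weighting formula and the quantum size are my own guesses purely for illustration, not anything Intel has published: the idea is that hardware would boost threads that issued few instructions in the last quantum, so lower-priority threads don't starve.

```python
# Toy model of hardware thread prioritization (hypothetical formula).
# static_prio: assigned by the OS (higher = more important).
# issued: instructions the thread actually issued last quantum.
QUANTUM = 64  # instructions per quantum (arbitrary choice)

def dynamic_priority(static_prio, issued):
    """Boost threads that were starved during the last quantum."""
    starvation_bonus = (QUANTUM - issued) / QUANTUM  # 0.0 .. 1.0
    return static_prio + starvation_bonus

# At each quantum boundary the issue logic would favor the runnable
# thread with the highest dynamic priority.
threads = {"A": (3, 64), "B": (3, 10), "C": (1, 0)}
ranked = sorted(threads, key=lambda t: dynamic_priority(*threads[t]),
                reverse=True)
# "B" outranks "A" here: same static priority, but B was starved.
```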

- sub-threads

Support for parallel programming.
A reduced 80386 Task-State Segment (TSS) will be defined
(avoiding the save of unnecessary registers such as ES/FS/GS).
A variant of JUMP [TSS] with a new Thread bit defined in the TSS
will spawn a sub-thread (analogous to a UNIX child process).
The sub-thread can stop itself with IRET [TSS].
A new WAIT [TSS] will synchronize the parent with its sub-thread.
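In software terms the spawn/IRET/WAIT trio maps onto the familiar thread (or UNIX child-process) lifecycle. A rough Python analogue, purely to illustrate the intended semantics of the hypothetical instructions:

```python
import threading

results = []

def sub_thread(n):
    # Body of the sub-thread; returning here plays the
    # role of IRET [TSS] (the sub-thread stops itself).
    results.append(n * n)

# JUMP [TSS] with the Thread bit set ~ spawning the sub-thread.
t = threading.Thread(target=sub_thread, args=(7,))
t.start()

# WAIT [TSS] ~ the parent synchronizes with its sub-thread.
t.join()
# results now holds the sub-thread's work: [49]
```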

- thread exceptions

A thread can raise exceptions to end or suspend itself.

- cache lines have thread bits in addition to LRU bits

When a cache line has to be evicted, victimize the line owned
by a lower-priority thread.
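A sketch of that eviction policy: when a set is full, prefer victimizing the line owned by the lowest-priority thread, breaking ties by LRU age. The tie-break order is my assumption; the post doesn't specify how the thread bits and LRU bits combine.

```python
# Each line in a cache set carries (owner_priority, lru_age) alongside
# its tag/data. Higher lru_age = older (less recently used).
def pick_victim(lines):
    """lines: list of (owner_priority, lru_age) tuples for one set.
    Returns the index of the line to evict: lowest owner priority
    first, oldest line among equals."""
    return min(range(len(lines)),
               key=lambda i: (lines[i][0], -lines[i][1]))

# Example set: priorities 5, 2, 2 with ages 3, 1, 9.
cache_set = [(5, 3), (2, 1), (2, 9)]
victim = pick_victim(cache_set)  # the older of the two priority-2 lines
```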

- ALUs: 8 simple, 4 complex.

- FPUs: 4 FADD, 4 FMUL, 2 FLDST.

- deprecation of FP SIMD instruction set

SIMD was a good idea for a single-threaded CPU, as it let the control unit
issue a single instruction for multiple data without resource hazards.
But a multi-threaded control unit would function optimally with
a wide window of decomposed (SISD) instructions.
Anonymous
September 29, 2005 6:41:42 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips

ref:
http://www.garlic.com/~lynn/2005q.html#46

for arcane references ... 8100
http://en.wikipedia.org/wiki/IBM_8100

rp3 reference ... even slightly on-topic with respect to
parallel operation
http://www-03.ibm.com/ibm/history/exhibits/vintage/vint...

of course with respect to earlier posts
http://www.garlic.com/~lynn/95.html#13
http://www.garlic.com/~lynn/2005q.html#38

the part of having stuff transferred to kingston for numerical
intensive and being told we couldn't work on anything with more than
four processors ... may have had some leftover issues because of RP3
.... in addition to the issue of encroaching on industrial strength
commercial data processing.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Anonymous
September 29, 2005 7:42:37 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips

Anne & Lynn Wheeler wrote:
> nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
>
>>A question for all you omniscient ones out there - what were the
>>worst computers of all time? IBM's candidate must surely be the
>>PC/RT, but AT&T is a strong competitor with the 3B2.
>
>
> i wouldn't consider it even close to the 8100.
>
snip

I never could understand why the 8100 existed. Was it a descendant of the
UC family? UC1, UC.5, etc.?

--
Del Cecchi
"This post is my own and doesn't necessarily represent IBM's positions,
strategies or opinions."
Anonymous
September 30, 2005 1:04:03 AM

Archived from groups: comp.arch,comp.sys.intel,comp.sys.ibm.pc.hardware.chips

"Nick Maclaren" <nmm1@cus.cam.ac.uk> wrote in message
news:Dhhbgg$o22$1@gemini.csx.cam.ac.uk...
> In article <EKU_e.91702$qY1.42252@bgtnsc04-news.ops.worldnet.att.net>,
> Stephen Fuld <s.fuld@PleaseRemove.att.net> wrote:
>>
>>
>>Also, you seem not to mention what is truely the largest component of
>>"commercial workloads", the systems that actually run the businesses.
>>These
>>are systems that keep the accounts for banks and insurance companies, that
>>handle sales and inventory for retail stores and distributers, make
>>reservations for airlines, rental car companies and hotels, handle pretty
>>much everyones payroll. etc.
>
> I didn't mention it, because it wasn't relevant. Back in the days
> of the 80386, no serious company used an IBM PC for that! Intel's
> second success was breaking into that market, but that came after
> the PowerPC had failed.
>
>>> "Scientific/technical" includes most conventional programming,
>>
>>I disagree with the word "most". There are far more "commercial"
>>programmers than "scientific" ones, they write far more programs that
>>consume far more total CPU cycles.
>
> What those people do can't really be called conventional programming,
> and quite a lot of the languages they use aren't even Turing complete
> (ignoring finiteness restrictions). The conventional programming for
> the "commercial" systems is done by a fairly small number of people
> (e.g. the people who develop Oracle), and the vast number use those
> higher-level programs.

Nonsense. Totally false distinction, and smells of academic elitism.

Whether a language is Turing complete or not is of zero interest.
The only thing of interest is whether the language will do the job at hand.
A language is just a tool. Programming is programming.
Is it only "real programmers, named Mel" who do "conventional
programming"? And what the heck does "conventional" mean
in any case? Plugboards? Loom cards?

> I can witness that IBM used to regard the actual programming of even
> some of the most "commercial" codes as a "scientific/technical"
> activity :-)

Of course. The discipline lives in science not in religion.

--

... Hank

http://home.earthlink.net/~horedson
http://home.earthlink.net/~w0rli
Anonymous
September 30, 2005 1:07:33 AM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips

"Nick Maclaren" <nmm1@cus.cam.ac.uk> wrote in message
news:Dhhbtm$ots$1@gemini.csx.cam.ac.uk...
> In article <m3vf0j6dv7.fsf@lhwlinux.garlic.com>,
> Anne & Lynn Wheeler <lynn@garlic.com> wrote:
>>Del Cecchi <cecchinospam@us.ibm.com> writes:
>>> Romp and rios were two different things as I recall. Although they say
>>> memory is second thing to go. As for Romp, what do you expect from a
>>> processor designed in Yorktown. :-)
>>
>>romp was 16bit processor that was supposed to be for the displaywriter
>>follow-on ... it was only after the project got killed ... that
>>somebody notice that you could port unix to any chip and call it
>>a unix workstation ... previous post
>>http://www.garlic.com/~lynn/2005q.html#38 Intel strikes back with a
>>parallel x86 design
>>
>>they had hired the group that had done the AT&T port to the ibm/pc for
>>pc/ix ... to do one for (office product division displaywriter) romp.
>
> A question for all you omniscient ones out there - what were the
> worst computers of all time? IBM's candidate must surely be the
> PC/RT, but AT&T is a strong competitor with the 3B2.


Univac 1110 with the 40MB disks and too small drum (FH-432) for swap.
Running early version of OS-1100 in a time-sharing environment.
Uptime tended to be measured in minutes.
"Hey, I managed to get logged in before it crashed again."

--

... Hank

http://home.earthlink.net/~horedson
http://home.earthlink.net/~w0rli
Anonymous
September 30, 2005 1:51:06 AM

Archived from groups: comp.arch,comp.sys.intel,comp.sys.ibm.pc.hardware.chips

In article <7FY_e.4683$zQ3.3248@newsread1.news.pas.earthlink.net>,
Hank Oredson <horedson@earthlink.net> wrote:
>>
>>>> "Scientific/technical" includes most conventional programming,
>>>
>>>I disagree with the word "most". There are far more "commercial"
>>>programmers than "scientific" ones, they write far more programs that
>>>consume far more total CPU cycles.
>>
>> What those people do can't really be called conventional programming,
>> and quite a lot of the languages they use aren't even Turing complete
>> (ignoring finiteness restrictions). The conventional programming for
>> the "commercial" systems is done by a fairly small number of people
>> (e.g. the people who develop Oracle), and the vast number use those
>> higher-level programs.
>
>Nonsense. Totally false distinction, and smells of academic elitism.

So what? I wasn't saying that I agreed with the distinction. I was
explaining how those companies use the term, and how they orient
their plans around it. Why ON EARTH do you think that I, as an
academic, necessarily agree with everything I describe?

For heaven's sake, do you think that every mediaeval historian who
describes the viewpoint of the Inquisition AGREES with burning
people at the stake?

>Whether a language is Turing complete or not is of zero interest.
>The only thing of interest is whether the language will do the job at hand.
>A language is just a tool. Programming is programming.
>Is it only "real programmers, named Mel" who do "conventional
>programming"? And what the heck does "conventional" mean
>in any case? Plugboards? Loom cards?

Sigh. Do PLEASE read what I say. I was making no value judgments,
but merely commenting that such uses are not CONVENTIONAL programming.
They aren't. It wasn't long ago that they weren't included in
programming at all - and I am NOT, repeat NOT, referring to only
academic use of the word.

I used that description to try and explain the difference in the
categories that was, and to a large extent still is, used by large
companies, like IBM and Intel.

My point was and is SOLELY that they categorise MOST conventional
programming of the type that I was describing as a scientific/
technical activity. That is ALL - get it? - ALL.


Regards,
Nick Maclaren.
Anonymous
September 30, 2005 1:51:07 AM

Archived from groups: comp.arch,comp.sys.intel,comp.sys.ibm.pc.hardware.chips

nmm1@cus.cam.ac.uk (Nick Maclaren) writes:

> In article <7FY_e.4683$zQ3.3248@newsread1.news.pas.earthlink.net>,
> Hank Oredson <horedson@earthlink.net> wrote:
> >>
> >>>> "Scientific/technical" includes most conventional programming,
> >>>
> >>>I disagree with the word "most". There are far more "commercial"
> >>>programmers than "scientific" ones, they write far more programs that
> >>>consume far more total CPU cycles.
> >>
> >> What those people do can't really be called conventional programming,
> >> and quite a lot of the languages they use aren't even Turing complete
> >> (ignoring finiteness restrictions). The conventional programming for
> >> the "commercial" systems is done by a fairly small number of people
> >> (e.g. the people who develop Oracle), and the vast number use those
> >> higher-level programs.
> >
> >Nonsense. Totally false distinction, and smells of academic elitism.
>
> So what? I wasn't saying that I agreed with the distinction. I was
> explaining how those companies use the term, and how they orient
> their plans around it. Why ON EARTH do you think that I, as an
> academic, necessarily agree with everything I describe?

If you're going to present a distinction, you need to either present
it as being somebody else's or expect people to think you agree with
it.

> For heaven's sake, do you think that every mediaeval historian who
> describes the viewpoint of the Inquisition AGREES with burning
> people at the stake?

If the historian presented the bald statement, "witches should be
burnt at the stake", then yes I'd think they agreed.

> >Whether a language is Turing complete or not is of zero interest.
> >The only thing of interest is whether the language will do the job at hand.
> >A language is just a tool. Programming is programming.
> >Is it only "real programmers, named Mel" who do "conventional
> >programming"? And what the heck does "conventional" mean
> >in any case? Plugboards? Loom cards?
>
> Sigh. Do PLEASE read what I say. I was making no value judgments,
> but merely commenting that such uses are not CONVENTIONAL programming.
> They aren't. It wasn't long ago that they weren't included in
> programming at all - and I am NOT, repeat NOT, referring to only
> academic use of the word.

I did read what you said. You appear to be expecting us to read what
you meant, which can be much harder.

> I used that description to try and explain the difference in the
> categories that was, and to a large extent still is, used by large
> companies, like IBM and Intel.
>
> My point was and is SOLELY that they categorise MOST conventional
> programming of the type that I was describing as a scientific/
> technical activity. That is ALL - get it? - ALL.

Then that is what you should have said.
--
Joseph J. Pfeiffer, Jr., Ph.D. Phone -- (505) 646-1605
Department of Computer Science FAX -- (505) 646-1002
New Mexico State University http://www.cs.nmsu.edu/~pfeiffer
skype: jjpfeifferjr
Anonymous
September 30, 2005 1:58:20 AM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips

In article <pIY_e.4685$zQ3.161@newsread1.news.pas.earthlink.net>,
Hank Oredson <horedson@earthlink.net> wrote:
>
>Uptime tended to be measured in minutes.
>"Hey, I managed to get logged in before it crashed again."

From what I have included, how would you tell what it was?


Regards,
Nick Maclaren.
September 30, 2005 2:13:26 AM

Archived from groups: comp.arch,comp.sys.intel,comp.sys.ibm.pc.hardware.chips

On Thu, 29 Sep 2005 17:05:48 +0000, Bill Davidsen wrote:

> keith wrote:
>> On Tue, 27 Sep 2005 16:31:53 +0000, Bill Davidsen wrote:
>>
>>
>>>YKhan wrote:
>>>
>>>>Chris Stiles wrote:
>>>>
>>>>
>>>>>"YKhan" <yjkhan@gmail.com> writes:
>>>>>
>>>>>
>>>>>>was in fact pushing it at that time. IBM was also fully behind the
>>>>>>later 386.
>>>>>
>>>>>IBM was only behind the 386 to the extent that it didn't cannibalise any of
>>>>>their existing products markets.
>>>>>
>>>>
>>>>
>>>>Which market exactly was the 386 going to cannibalize? The 286 market?
>>>
>>>The minicomputer market.
>>
>>
>> IBM didn't own the minicomputer market in the early '80s. DEC did. IBM
>> captured a good chunk with the AS/400, but that was because the software
>> was better. There was no OS/400 running on x86.
>>
>>
>>>The rack mount server of the mid-90's was the
>>>R20 and R30, which did not compete well against the 386/486 in terms of
>>>compute power. They were used because they had a large address space and
>>>lots of i/o bandwidth, and because there was a server OS (AIX) for them.
>>
>>
>> ...and IBM still sold *tons* of '386/'486 boxen. Go figure.
>>
>>
>>>By 2000 IBM was offering rack mount Intel systems, ostensibly for
>>>Windows server, but many running that "toy OS" Linux.
>>
>>
>> 2000? Gee, I thought the '386 was out a tad before that.
>>
> Got a model number for an IBM rackmount server using the 386? We sure
> never saw such a thing. You manage (as usual) to disparage without
> providing a single verifiable fact.

Who gives a rat's ass about the sheet-metal or the logo on the front?
A '386 is a '386! Sheesh!

--
Keith
Anonymous
September 30, 2005 2:27:36 AM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips

"Hank Oredson" <horedson@earthlink.net> wrote in message
news:pIY_e.4685$zQ3.161@newsread1.news.pas.earthlink.net...
> "Nick Maclaren" <nmm1@cus.cam.ac.uk> wrote in message
> news:Dhhbtm$ots$1@gemini.csx.cam.ac.uk...
>> In article <m3vf0j6dv7.fsf@lhwlinux.garlic.com>,
>> Anne & Lynn Wheeler <lynn@garlic.com> wrote:
>>>Del Cecchi <cecchinospam@us.ibm.com> writes:
>>>> Romp and rios were two different things as I recall. Although they say
>>>> memory is second thing to go. As for Romp, what do you expect from a
>>>> processor designed in Yorktown. :-)
>>>
>>>romp was 16bit processor that was supposed to be for the displaywriter
>>>follow-on ... it was only after the project got killed ... that
>>>somebody notice that you could port unix to any chip and call it
>>>a unix workstation ... previous post
>>>http://www.garlic.com/~lynn/2005q.html#38 Intel strikes back with a
>>>parallel x86 design
>>>
>>>they had hired the group that had done the AT&T port to the ibm/pc for
>>>pc/ix ... to do one for (office product division displaywriter) romp.
>>
>> A question for all you omniscient ones out there - what were the
>> worst computers of all time? IBM's candidate must surely be the
>> PC/RT, but AT&T is a strong competitor with the 3B2.
>
>
> Univac 1110 with the 40MB disks and too small drum (FH-432) for swap.

You could have used the FH-1782, which at 17.82 ms access time was still
much faster than any disk of the day. It had, IIRC, 8 times the capacity of
the FH-432. As for disks, wasn't the 8440 out by then? They were about 110
MB.

The big problem with the 1110 was the two-level main memory, especially
since the primary was plated-wire technology, which never worked well at
all.

> Running early version of OS-1100 in a time-sharing environment.

The first level of OS-1100 that ran the 1110 was level 30, so it had been
out a while. But I agree that stability was a problem, especially when you
pushed it. It got a lot better by level 36.

--
- Stephen Fuld
e-mail address disguised to prevent spam
Anonymous
September 30, 2005 2:30:25 AM

Archived from groups: comp.arch,comp.sys.intel,comp.sys.ibm.pc.hardware.chips

"Nick Maclaren" <nmm1@cus.cam.ac.uk> wrote in message
news:Dhhnka$i0j$1@gemini.csx.cam.ac.uk...

snip

> My point was and is SOLELY that they categorise MOST conventional
> programming of the type that I was describing as a scientific/
> technical activity. That is ALL - get it? - ALL.

OK. I am trying to understand. Would you say that someone who programs a
payroll application in COBOL, regardless of whether it uses Oracle or tape
files for its input, is engaging in a "scientific/technical activity"?

--
- Stephen Fuld
e-mail address disguised to prevent spam