65nm news from Intel

Tags:
  • CPUs
  • Hardware
  • Intel
  • IBM
  • Product
Anonymous
August 30, 2004 9:39:10 AM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

http://www.reuters.com/locales/c_newsArticle.jsp?type=t...

Yousuf Khan


August 30, 2004 9:39:11 AM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

"Yousuf Khan" <bbbl67@ezrs.com> wrote in message news:<24zYc.102338$UTP.50876@twister01.bloor.is.net.cable.rogers.com>...
> http://www.reuters.com/locales/c_newsArticle.jsp?type=t...
>
> Yousuf Khan

official press release,

http://crew.tweakers.net/Wouter/Press65nm804a.pdf

more publicity,

http://www.extremetech.com/article2/0,1558,1640647,00.a...
http://news.com.com/Intel+to+throttle+power+by+enhancin...
http://cbs.marketwatch.com/news/story.asp?guid=%7BE706E...

Looks damn good on paper.
Anonymous
August 30, 2004 10:54:23 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

"Yousuf Khan" <bbbl67@ezrs.com> wrote in message
news:24zYc.102338$UTP.50876@twister01.bloor.is.net.cable.rogers.com...
> http://www.reuters.com/locales/c_newsArticle.jsp?type=t...
>
> Yousuf Khan
>
>

I don't know, maybe it's just me, but it seems like this article puts way too
much importance on the manufacturing process a CPU is made on. Not that
these things aren't important at all... But the fact that my Athlon64 3000+
is still made on a .13 process really didn't discourage me at all. My
system still performs extremely well despite being a "generation behind"
Intel's Prescott.

Carlo
August 30, 2004 11:03:00 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

It looks like AMD is progressing nicely with .09. This website shows
the Athlon 64 4000+ and 3800+, as well as the FX-55, as scheduled for
release in October:

http://www.c627627.com/AMD/Athlon64/

Mobile Athlon 64 chips for thin and light notebooks are now being
made on .09.

Carlo Razzeto wrote:

> "Yousuf Khan" <bbbl67@ezrs.com> wrote in message
> news:24zYc.102338$UTP.50876@twister01.bloor.is.net.cable.rogers.com...
> > http://www.reuters.com/locales/c_newsArticle.jsp?type=t...
> >
> > Yousuf Khan
> >
> >
>
> I don't know, maybe it's just me but it seems like this article puts way to
> much importance on the manufacturing process a CPU is made on.. Not that
> these things aren't important at all... But the fact that my Athlon64 3000+
> is still made on a .13 process really didn't discourage me at all.. My
> system still performs extremely well despite being a "generation behind"
> Intel's Prescott.
>
> Carlo
Anonymous
August 31, 2004 2:47:37 AM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

On Mon, 30 Aug 2004 18:54:23 -0400, "Carlo Razzeto"
<crazzeto@hotmail.com> wrote:
>
>"Yousuf Khan" <bbbl67@ezrs.com> wrote in message
>news:24zYc.102338$UTP.50876@twister01.bloor.is.net.cable.rogers.com...
>> http://www.reuters.com/locales/c_newsArticle.jsp?type=t...
>>
>
>I don't know, maybe it's just me but it seems like this article puts way to
>much importance on the manufacturing process a CPU is made on.. Not that
>these things aren't important at all... But the fact that my Athlon64 3000+
>is still made on a .13 process really didn't discourage me at all.. My
>system still performs extremely well despite being a "generation behind"
>Intel's Prescott.

The important difference is that the Athlon64 3000+ costs AMD more to
build than Intel's Prescott 3.0GHz chips, yet sells for less. A new
process generation is as much about economics as about technology
these days (case in point: Intel is very aggressively moving the
low-end Celeron to the newest manufacturing process rather than just
focusing on high-end chips first).

-------------
Tony Hill
hilla <underscore> 20 <at> yahoo <dot> ca
August 31, 2004 3:02:24 AM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

JK wrote:

> It looks like AMD is progressing nicely with .09
> This website shows the Athlon 64 4000+ and 3800+

The 3800+ on .09 that is. The 3800+ on .13 was released earlier.

> as
> well as the FX-55 as scheduled for release in October.
>
> http://www.c627627.com/AMD/Athlon64/
>
> Mobile Athlon 64 chips for thin and light notebooks are
> being made now on .09
>
> Carlo Razzeto wrote:
>
> > "Yousuf Khan" <bbbl67@ezrs.com> wrote in message
> > news:24zYc.102338$UTP.50876@twister01.bloor.is.net.cable.rogers.com...
> > > http://www.reuters.com/locales/c_newsArticle.jsp?type=t...
> > >
> > > Yousuf Khan
> > >
> > >
> >
> > I don't know, maybe it's just me but it seems like this article puts way to
> > much importance on the manufacturing process a CPU is made on.. Not that
> > these things aren't important at all... But the fact that my Athlon64 3000+
> > is still made on a .13 process really didn't discourage me at all.. My
> > system still performs extremely well despite being a "generation behind"
> > Intel's Prescott.
> >
> > Carlo
Anonymous
August 31, 2004 3:55:47 AM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

"Tony Hill" <hilla_nospam_20@yahoo.ca> wrote in message
news:r5n7j0ljl4vhlnr2t1nmfmdklpbgf62f6p@4ax.com...
> On Mon, 30 Aug 2004 18:54:23 -0400, "Carlo Razzeto"
> <crazzeto@hotmail.com> wrote:
>
> The important difference is that Athlon64 3000+ costs AMD more to
> build than Intel's Prescott 3.0GHz chips, yet sells for less. New
> process generation is equally one part technology, one part financial
> these days (case-in-point, Intel is very aggressively moving the
> low-end Celeron to the newest manufacturing product rather than just
> focusing on high-end chips first).
>
> -------------
> Tony Hill
> hilla <underscore> 20 <at> yahoo <dot> ca

This I realize and I'm not trying to take that away... I'm just saying that
if I didn't know any better and I were to read the article, I might tend to
automatically assume that a .13 chip is worse than a .09 chip, etc., when
the truth is the manufacturing process is not really going to have a huge
impact on performance (unless of course it means they can get more MHz out
of it).

Carlo
Anonymous
August 31, 2004 4:53:30 AM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

Carlo Razzeto wrote:
> I don't know, maybe it's just me but it seems like this article puts
> way to much importance on the manufacturing process a CPU is made
> on.. Not that these things aren't important at all... But the fact
> that my Athlon64 3000+ is still made on a .13 process really didn't
> discourage me at all.. My system still performs extremely well
> despite being a "generation behind" Intel's Prescott.

Shhh! Intel needs a little bit of a pick-me-up. Let it enjoy its usual
fawning coverage, like from yesteryear. :-)

Yousuf Khan
Anonymous
August 31, 2004 6:41:55 AM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

On Mon, 30 Aug 2004 23:55:47 -0400, "Carlo Razzeto"
<crazzeto@hotmail.com> wrote:
>
>"Tony Hill" <hilla_nospam_20@yahoo.ca> wrote in message
>news:r5n7j0ljl4vhlnr2t1nmfmdklpbgf62f6p@4ax.com...
>> On Mon, 30 Aug 2004 18:54:23 -0400, "Carlo Razzeto"
>> <crazzeto@hotmail.com> wrote:
>>
>> The important difference is that Athlon64 3000+ costs AMD more to
>> build than Intel's Prescott 3.0GHz chips, yet sells for less. New
>> process generation is equally one part technology, one part financial
>> these days (case-in-point, Intel is very aggressively moving the
>> low-end Celeron to the newest manufacturing product rather than just
>> focusing on high-end chips first).
>
>This I realize and I'm not trying to take that away... I'm just saying that
>if I didn't know any better and I were to read the article I might tend to
>automatically assume that a .13 chip is worse than a .09 chip etc.... When
>the truth is the manufacturing process is not really going to have a huge
>impact in performance (unless of course it means they can get more MHz out
>of it).

Well, until very recently a new manufacturing process DID mean that
they could get more MHz out of it, usually quite a bit more MHz. On
the old 180nm process the P4 struggled to reach 2.0GHz, while on the
130nm process Intel has managed to push the chip up to 3.4GHz.
Previously the gains were even larger, with the 250nm PIII topping out
at 600MHz and the 180nm version eventually managing 1.13GHz.

However, the new 90nm fab process has perhaps thrown this automatic
assumption of much higher clock speeds into question, at least for the
time being. Intel is still having trouble getting the "Prescott" P4 up
to 3.6GHz and has pushed back the release date of its 3.8 and
4.0GHz P4 chips multiple times. This might just be a specific
situation, as the Prescott is a VERY different chip from the
Northwood, beyond simply the process shrink; however, IBM doesn't seem
to be doing much better with its PowerPC chips. The PPC 970 (130nm)
made it to 2.0GHz and might have had some headroom left, while
currently IBM is struggling to get decent production of the 2.5GHz PPC
970FX (90nm).


So... err.. what was the point I was trying to get at here again?!
Ohh yeah, I think I'm basically agreeing with you :>

-------------
Tony Hill
hilla <underscore> 20 <at> yahoo <dot> ca
Anonymous
August 31, 2004 11:54:52 AM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

This is just extra publicity for what has already been
known for months, i.e. the drive to 65nm is on a fast
pace and things are looking good: much more strained
silicon, better internal power management, etc. The really
exciting transistor designs will happen at 45nm, using high-k
gate dielectrics, though that's still three years away. And there
is interesting research going on at 15nm, for the next decade.

What's not known is exactly how Intel is going to design
the silicon. How are the multiple cores going to work, especially
with the one bus? Even more significantly, how are applications going
to benefit from the 2+ cores? Are they going to have to explicitly
code multithreading to benefit, which after all ain't easy to pull off,
or will the feeding of the multiple cores be handled effectively by the
compilers, or maybe even the OS? I see that Intel has released a
thread-checking tool; hopefully MS will incorporate something like it
in the next Studio.

So far, it looks like the upcoming multi-core chip designs will depend heavily
on how applications are developed, more so than ever before. We
already saw some of this with the branch predictors; the results
weren't impressive at all. If the thread-related logic issues can't somehow be
handled at the tool, OS, compiler, or chip level, then it's going to be a long
wait before the full potential of 2+ cores is reaped. 2+ cores may end up like
the 386: full of potential but not enough software support.
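
As a rough illustration of the "explicitly code multithreading" option
above, here is a minimal sketch in modern C++ (std::thread and friends
post-date this thread; parallel_sum() and the hand-rolled work split are
made up for illustration, not anything Intel or MS ship):

    #include <cstddef>
    #include <cstdio>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Sum a large array by handing each core a contiguous slice.
    // The split is done by hand -- nothing here is discovered by the
    // compiler or the OS, which is the whole point of the example.
    double parallel_sum(const std::vector<double>& v, unsigned cores)
    {
        std::vector<double> partial(cores, 0.0);
        std::vector<std::thread> workers;
        std::size_t chunk = v.size() / cores;

        for (unsigned c = 0; c < cores; ++c) {
            std::size_t lo = c * chunk;
            std::size_t hi = (c + 1 == cores) ? v.size() : lo + chunk;
            workers.emplace_back([&, lo, hi, c] {
                partial[c] = std::accumulate(v.begin() + lo, v.begin() + hi, 0.0);
            });
        }
        for (auto& t : workers) t.join();   // wait for every worker
        return std::accumulate(partial.begin(), partial.end(), 0.0);
    }

    int main()
    {
        std::vector<double> data(1 << 22, 1.0);
        unsigned cores = std::thread::hardware_concurrency();
        if (cores == 0) cores = 2;          // fall back to a guess
        std::printf("sum = %.0f on %u threads\n", parallel_sum(data, cores), cores);
    }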



"Yousuf Khan" <bbbl67@ezrs.com> wrote in message
news:24zYc.102338$UTP.50876@twister01.bloor.is.net.cable.rogers.com...
>
http://www.reuters.com/locales/c_newsArticle.jsp?type=t...
>
> Yousuf Khan
>
>
Anonymous
August 31, 2004 1:17:50 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

In article <g9WYc.3239$OQ6.1732@trnddc09>, "Raymond" <no@all.net> writes:
|> This is just extra publicity for what has already been
|> known for months, ie the drive to 65nm is on a fast
|> pace, things are looking good, much more straining of
|> silicon, better internal power management, etc. The really
|> exciting transistor designs will happen at 45nm, using the high-k
|> interconnects. Though that's still three years away. And there
|> is interesting research going on at 15nm, for the next decade.

Oh, really? I did a quick Web search, but couldn't find when
the comparable announcement was made for 90 nm. I vaguely
remember mid-2001, which was a little matter of 3 years before
90 nm hit the streets in quantity.

If my recollection is correct, it isn't looking good at all for
65 nm, as the passive leakage problems are even worse. Mid-2007
for mass production isn't what Intel are hoping for (or claiming),
but IS what ITRS are predicting ....

I shall not be holding my breath for 65 nm; you are welcome to
hold yours for it :-)


Regards,
Nick Maclaren.
Anonymous
August 31, 2004 1:17:51 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

Nick Maclaren wrote:
> If my recollection is correct, it isn't looking good at all for
> 65 nm, as the passive leakage problems are even worse. Mid-2007
> for mass production isn't what Intel are hoping for (or claiming),
> but IS what ITRS are predicting ....

If you read the article, the statement is that leakage is dealt with to
a degree by straining the silicon lattice. I don't know how much that
changes things, but they want us to think it solves the problem (which
it probably doesn't).

I thought 2005 was too soon for 65nm, but that's what I read: that the
Pentium 4 will be shipping on 65nm in 2005. Which, thankfully, gives
that embarrassment that is Prescott just one year of life.

Alex
--
My words are my own. They represent no other; they belong to no other.
Don't read anything into them or you may be required to compensate me
for violation of copyright. (I do not speak for my employer.)
August 31, 2004 6:03:34 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

"Raymond" <no@all.net> wrote :

[cut]

> reaping the full potential of 2+ cores. 2+ cores may end up like
> the 386, full of potential but not enough software support.

yes, like all the rest of SMP boxes, obsolete and unsupported ...

Regards.
--
RusH //
http://randki.o2.pl/profil.php?id_r=352019
Like ninjas, true hackers are shrouded in secrecy and mystery.
You may never know -- UNTIL IT'S TOO LATE.
Anonymous
August 31, 2004 6:37:13 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

In article <ch1rtu$4hc$1@news01.intel.com>,
Alex Johnson <compuwiz@jhu.edu> writes:
|>
|> If you read the article, the statement is that leakage is dealt with to
|> a degree by straining the silicon lattice. I don't know how much that
|> changes things, but they want us to think it solves the problem (which
|> it probably doesn't).

One of the most reliable sources in the industry has told me that
it doesn't. Yes, it helps, but only somewhat.

|> I thought 2005 was too soon for 65nm, but that's what I read. That
|> Pentium 4 will be shipping in 2005 on 65nm. Which, thankfully, gives
|> that embarrassment that is Prescott just one year of life.

If you believe that ordinary customers will be able to buy 65 nm
Pentium 4s at commodity prices in mid-2005, I have this bridge for
sale ....


Regards,
Nick Maclaren.
Anonymous
August 31, 2004 8:05:08 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

On Tue, 31 Aug 2004 02:41:55 -0400, Tony Hill
<hilla_nospam_20@yahoo.ca> wrote:

>However the new 90nm fab process has maybe thrown this automatic
>assumption of much higher clock speeds into question, at least for the
>time being. Intel's still having trouble getting the "Prescott" P4 up
>to 3.6GHz and have pushed back the release date of their 3.8 and
>4.0GHz P4 chips multiple times.

As I understand it, you could indeed hit, say, 5 GHz with a 90 nm
process (and Prescott's design - longer pipeline, etc - indicates
Intel were hoping to do just that), except that the chip would melt?

--
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
Anonymous
August 31, 2004 8:23:33 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

In article <4134a157.322981650@news.eircom.net>,
wallacethinmintr@eircom.net (Russell Wallace) writes:
|> On Tue, 31 Aug 2004 02:41:55 -0400, Tony Hill
|> <hilla_nospam_20@yahoo.ca> wrote:
|>
|> >However the new 90nm fab process has maybe thrown this automatic
|> >assumption of much higher clock speeds into question, at least for the
|> >time being. Intel's still having trouble getting the "Prescott" P4 up
|> >to 3.6GHz and have pushed back the release date of their 3.8 and
|> >4.0GHz P4 chips multiple times.
|>
|> As I understand it, you could indeed hit, say, 5 GHz with a 90 nm
|> process (and Prescott's design - longer pipeline, etc - indicates
|> Intel were hoping to do just that), except that the chip would melt?

I am pretty sure that Intel could cool the chip, even at that speed.
A factory-fitted silver heatsink, with high-speed water-cooling to
a heat exchanger in front of a large and fast fan, bolted into a
heavy chassis, should do the job.

As a demonstration of virtuosity, it would be excellent. As a
system to sell in large numbers, perhaps not.


Regards,
Nick Maclaren.
Anonymous
September 1, 2004 8:01:59 AM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

"Nick Maclaren" <nmm1@cus.cam.ac.uk> wrote in message
news:ch1fnu$9vv$1@pegasus.csx.cam.ac.uk...
>
> In article <g9WYc.3239$OQ6.1732@trnddc09>, "Raymond" <no@all.net> writes:
> |> This is just extra publicity for what has already been
> |> known for months, ie the drive to 65nm is on a fast
> |> pace, things are looking good, much more straining of
> |> silicon, better internal power management, etc. The really
> |> exciting transistor designs will happen at 45nm, using the high-k
> |> interconnects. Though that's still three years away. And there
> |> is interesting research going on at 15nm, for the next decade.
>
> Oh, really? I did a quick Web search, but couldn't find when
> the comparable announcement was made for 90 nm. I vaguely
> remember mid-2001, which was a little matter of 3 years before
> 90 nm hit the streets in quantity.

If you read exactly what Intel said after they achieved 90nm
SRAM, they weren't anywhere near as rosy as they are now with
65nm.

> If my recollection is correct, it isn't looking good at all for
> 65 nm, as the passive leakage problems are even worse. Mid-2007
> for mass production isn't what Intel are hoping for (or claiming),
> but IS what ITRS are predicting ....
>
> I shall not be holding my breath for 65 nm; you are welcome to
> hold yours for it :-)

I am holding my breath! :-)
Anonymous
September 1, 2004 8:01:59 AM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

"Nick Maclaren" <nmm1@cus.cam.ac.uk> wrote in message
news:ch22ep$qkl$1@pegasus.csx.cam.ac.uk...
>
> In article <ch1rtu$4hc$1@news01.intel.com>,
> Alex Johnson <compuwiz@jhu.edu> writes:
> |>
> |> If you read the article, the statement is that leakage is dealt with to
> |> a degree by straining the silicon lattice. I don't know how much that
> |> changes things, but they want us to think it solves the problem (which
> |> it probably doesn't).
>
> One of the most reliable sources in the industry has told me that
> it doesn't. Yes, it helps, but only somewhat.
>
> |> I thought 2005 was too soon for 65nm, but that's what I read. That
> |> Pentium 4 will be shipping in 2005 on 65nm. Which, thankfully, gives
> |> that embarrassment that is Prescott just one year of life.
>
> If you believe that ordinary customers will be able to buy 65 nm
> Pentium 4s at commodity prices in mid-2005, I have this bridge for
> sale ....

What they're saying is first production in 2005, and high volume by
2006, perhaps even high enough to overtake that of 90nm.
Anonymous
September 1, 2004 12:34:41 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

In article <XQbZc.44$A63.6@trnddc09>, Raymond <no@all.net> wrote:
>>
>> Oh, really? I did a quick Web search, but couldn't find when
>> the comparable announcement was made for 90 nm. I vaguely
>> remember mid-2001, which was a little matter of 3 years before
>> 90 nm hit the streets in quantity.
>
>If you read exactly what Intel said after they achieved 90nm
>SRAM, they weren't anywhere as rosy as they are now with
>65nm.

I need to correct what I said - it was 2 years. March 2002.

Actually, I remember them being every bit as optimistic. Anyway,
such claims are worth almost as much as the hot air that carries
them.

>> I shall not be holding my breath for 65 nm; you are welcome to
>> hold yours for it :-)
>
>I am holding my breath! :-)

You have better lungs than I do :-)


Regards,
Nick Maclaren.
Anonymous
September 1, 2004 12:39:19 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

In article <XQbZc.45$A63.43@trnddc09>, Raymond <no@all.net> wrote:
>"Nick Maclaren" <nmm1@cus.cam.ac.uk> wrote in message
>news:ch22ep$qkl$1@pegasus.csx.cam.ac.uk...
>> In article <ch1rtu$4hc$1@news01.intel.com>,
>> Alex Johnson <compuwiz@jhu.edu> writes:
>> |>
>> |> I thought 2005 was too soon for 65nm, but that's what I read. That
>> |> Pentium 4 will be shipping in 2005 on 65nm. Which, thankfully, gives
>> |> that embarrassment that is Prescott just one year of life.
>>
>> If you believe that ordinary customers will be able to buy 65 nm
>> Pentium 4s at commodity prices in mid-2005, I have this bridge for
>> sale ....
>
>What they're saying is first production in 2005, and high volume by
>2006, perhaps even high enough to overtake that of 90nm.

Even if that were so, it would give Prescott a lot more than a year
to hold the fort.

Anyway, once upon a time when knights were bold and press statements
were intended to convey information, "production" meant the delivery
of products, and "products" meant goods sold to ordinary customers.
At least in this context.

Yes, I believe that Intel (and IBM) will be able to make 65 nm CPUs
in early 2005, perhaps even late 2004. But small numbers of chips
made for testing do not constitute production in any meaningful
sense.

Regards,
Nick Maclaren.
Anonymous
September 1, 2004 12:54:09 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

gaf1234567890@hotmail.com (G) writes:
> Every version of Windows based on NT (NT, 2000, XP, Server 2k3,
> Longhorn, etc) has gotten progressively better at utilizing multiple
> CPU's. MS keeps tweaking things to a finer level of granularity. So
> minimally, a single threaded application could still hog 1 CPU, but
> at least the OS underneath will do it's best to make use of the
> other CPU.

Long ago and far away I was told that the people in Beaverton had done
quite a bit of the NT SMP work ... since all they had was SMP (while
Redmond concentrated on their primary customer base ... which was
mostly all non-SMP).

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Anonymous
September 1, 2004 2:40:00 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

> I am pretty sure that Intel could cool the chip, even at that speed.
> A factory-fitted silver heatsink, with high-speed water-cooling to
> a heat exchanger in front of a large and fast fan, bolted into a
> heavy chassis, should do the job.

A heat pipe is better at moving heat than any solid material, and quite
easy to use.

Dumping all those watts in the environment, absent water cooling, is more
of a problem. I'd rather not have several hundred watts heating the air in
my office, thank you.

Jan
Anonymous
September 1, 2004 2:40:01 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

In article <2plg70Fm26psU1@uni-berlin.de>,
Jan Vorbrüggen <jvorbrueggen-not@mediasec.de> wrote:
>> I am pretty sure that Intel could cool the chip, even at that speed.
>> A factory-fitted silver heatsink, with high-speed water-cooling to
>> a heat exchanger in front of a large and fast fan, bolted into a
>> heavy chassis, should do the job.
>
>A heat pipe is better at moving heat than any solid material, and quite
>easy to use.

Hang on - I never said that the silver heatsink was solid! It should
be silver for the conductivity and resistance to corrosion, but I was
assuming circulating water inside it. Sorry about omitting that
critical point :-(

>Dumping all those watts in the environment, absent water cooling, is more
>of a problem. I'd rather not have several hundred watts heating the air in
>my office, thank you.

Or 1,000 of them dumping heat in my machine room ....


Regards,
Nick Maclaren.
Anonymous
September 1, 2004 3:08:07 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

Nick Maclaren wrote:
>
> But your last remark is correct. It isn't hard to separate GUIs
> into multiple components, separated by message passing (whether
> using thread primitives or not), and those are a doddle to schedule
> on multi-core systems. And that is the way that things are going.
>

I'm not sure that the GUI by itself is enough to justify a multi-core
CPU. And there are problems enough in a multi-threaded GUI, even apart
from deadlocks caused by inexperienced programmers mixing threads and OO
callbacks. Consider mouse events queued before but received after a
resize operation: the mouse coordinates are in the wrong frame of reference
and come out all wrong. GUI designers design as if the event queue depth
were <= 1 at all times.

What would be more likely to utilize concurrency would be the database-like
Longhorn filesystem that MS is supposed to be doing. Except that I don't
think MS has the expertise to do lock-free concurrent programming like that.
If they have, they've been keeping a low profile.
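
For what it's worth, here is a bare-bones sketch of the sort of lock-free
technique alluded to above: a Treiber-style stack push that coordinates
through a single compare-and-swap instead of a mutex. It is illustrative
only (modern C++ atomics, not any MS or Longhorn code), and a matching pop
runs straight into the ABA/reclamation problems that make this genuinely
hard:

    #include <atomic>

    // Lock-free LIFO push (Treiber stack): threads coordinate through a
    // single CAS on 'head', never through a mutex, so no thread can block
    // another. Pop (and node reclamation) is deliberately omitted.
    struct Node {
        int   value;
        Node* next;
    };

    std::atomic<Node*> head{nullptr};

    void push(int v)
    {
        Node* n = new Node{v, head.load(std::memory_order_relaxed)};
        // Retry until we swing 'head' from the value we last saw to our node.
        // On failure, compare_exchange_weak reloads head into n->next.
        while (!head.compare_exchange_weak(n->next, n,
                                           std::memory_order_release,
                                           std::memory_order_relaxed)) {
            /* just retry */
        }
    }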

Joe Seigh
Anonymous
September 1, 2004 3:21:52 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

G wrote:
>
> Every version of Windows based on NT (NT, 2000, XP, Server 2k3,
> Longhorn, etc) has gotten progressively better at utilizing multiple
> CPU's. MS keeps tweaking things to a finer level of granularity. So
> minimally, a single threaded application could still hog 1 CPU, but at
> least the OS underneath will do it's best to make use of the other
> CPU.

A data point. I'm doing nothing much except reading this group and yet
the XP performance monitor shows a queue of 7 or 8 threads ready to run.

I think applications like Word and Excel already do things like spell-
checking and recalculation in worker threads. I don't find it hard to
believe that a typical Windows box would benefit from 4+ "processors".
Anonymous
September 1, 2004 4:27:34 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

In article <4135ADBB.722B70F9@xemaps.com>,
Joe Seigh <jseigh_01@xemaps.com> writes:
|>
|> I'm not sure that the gui by itself is enough to justify a multi-core
|> cpu. And there are problems enough in multi-threaded gui, even apart
|> from deadlocks caused by inexperienced programmer mixing threads and OO
|> callbacks. Consider mouse events queued before but received after a
|> resize operation. The mouse coordinates are in the wrong frame of reference
|> and all wrong. Gui designers design as if the event queue was <= 1 at all
|> times.

Take a mouse event in an unrealistically simple design. This is picked
up by the kernel, and passed to the display manager, which converts it
into another form and passes it to the application. That does something
with it, passes a message to the display manager, which calls the kernel
to update the screen. The user does not see any effect until that has
completed.

At best, you have 4 context switches, 2 of which are between user-level
contexts, and it is common for there to be MANY more. Now, consider
that being done as part of drag-and-drop - you want the process to
happen in under 2 milliseconds (certainly under 5), or it will start to
be visible. That can be 1,000+ context switches a second, and some
of those contexts have large working sets, so you are reloading a
lot of cache and TLBs.

One of the advantages of a multi-core system is that you don't need to
switch context just to pass a message if the threads or processes are
on different cores. You just pass the message.
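
A rough sketch of what "just pass the message" can look like between two
threads that may be running on different cores: a tiny producer/consumer
queue in modern C++. It is a generic illustration under assumed names
(Event, producer, consumer), not how any real display manager is
structured:

    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <thread>

    struct Event { int x, y; };            // stand-in for a mouse event

    std::queue<Event>       q;
    std::mutex              m;
    std::condition_variable cv;
    bool                    done = false;

    void producer()                        // e.g. the "display manager" side
    {
        for (int i = 0; i < 5; ++i) {
            { std::lock_guard<std::mutex> lk(m); q.push({i, 2 * i}); }
            cv.notify_one();               // if the consumer sits on another
        }                                  // core, this side keeps running
        { std::lock_guard<std::mutex> lk(m); done = true; }
        cv.notify_one();
    }

    void consumer()                        // e.g. the "application" side
    {
        std::unique_lock<std::mutex> lk(m);
        for (;;) {
            cv.wait(lk, [] { return !q.empty() || done; });
            while (!q.empty()) {
                Event e = q.front(); q.pop();
                std::printf("event at (%d,%d)\n", e.x, e.y);
            }
            if (done) return;
        }
    }

    int main()
    {
        std::thread c(consumer), p(producer);
        p.join(); c.join();
    }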


Regards,
Nick Maclaren.
September 1, 2004 5:45:05 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

nmm1@cus.cam.ac.uk (Nick Maclaren) wrote in message news:<ch4f7m$sq6$1@pegasus.csx.cam.ac.uk>...
> In article <4135ADBB.722B70F9@xemaps.com>,
> Joe Seigh <jseigh_01@xemaps.com> writes:
> |>
> |> I'm not sure that the gui by itself is enough to justify a multi-core
> |> cpu. And there are problems enough in multi-threaded gui, even apart
> |> from deadlocks caused by inexperienced programmer mixing threads and OO
> |> callbacks. Consider mouse events queued before but received after a
> |> resize operation. The mouse coordinates are in the wrong frame of reference
> |> and all wrong. Gui designers design as if the event queue was <= 1 at all
> |> times.
>
> Take a mouse event in an unrealistically simple design. This is picked
> up by the kernel, and passed to the display manager, which converts it
> into another form and passes it to the application. That does something
> with it, passes a message to the display manager, which calls the kernel
> to update the screen. The user does not see any effect until that has
> completed.
>
> At best, you have 4 context switches, 2 of which are between user-level
> contexts, and it is common for there to be MANY more. Now, consider
> that being done as part of drag-and-drop - you want the process to
> happen in under 2 milliseconds (certainly under 5), or it will start to
> be visible. That can be 1,000+ context switches a second, and some
> of those contexts have large working sets, so you are reloading a
> lot of cache and TLBs.
>
> One of the advantages of a multi-core system is that you don't need to
> switch context just to pass a message if the threads or processes are
> on different cores. You just pass the message.
>
>
> Regards,
> Nick Maclaren.


Actually I wasn't even thinking about anything remotely as complicated
as that.

What I thought is that, since XAML is declarative in nature, an
"inexperienced programmer mixing threads and OO callbacks" (Joe's
comment) wouldn't really be doing the coding at all. It would be done
(and theoretically optimized) by the implementation that sits behind
it.

With respect to both threaded apps and GUI development, my only point
is that it's one possible benefit of the newer higher-level
languages/tools. In fact I seem to remember the exact same case being
made a long time ago for things like the UCSD P-System... Whether it's
true or not I can't say.
Anonymous
September 1, 2004 5:52:11 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

We are on track for mass shipment of a
billion (that's with a B) transistor die by '08.

We shall all now bow toward Santa Clara.

Moore Rules!!!!


"Raymond" <no@all.net> wrote in message news:XQbZc.44$A63.6@trnddc09...
>
> "Nick Maclaren" <nmm1@cus.cam.ac.uk> wrote in message
> news:ch1fnu$9vv$1@pegasus.csx.cam.ac.uk...
> >
> > In article <g9WYc.3239$OQ6.1732@trnddc09>, "Raymond" <no@all.net>
writes:
> > |> This is just extra publicity for what has already been
> > |> known for months, ie the drive to 65nm is on a fast
> > |> pace, things are looking good, much more straining of
> > |> silicon, better internal power management, etc. The really
> > |> exciting transistor designs will happen at 45nm, using the high-k
> > |> interconnects. Though that's still three years away. And there
> > |> is interesting research going on at 15nm, for the next decade.
> >
> > Oh, really? I did a quick Web search, but couldn't find when
> > the comparable announcement was made for 90 nm. I vaguely
> > remember mid-2001, which was a little matter of 3 years before
> > 90 nm hit the streets in quantity.
>
> If you read exactly what Intel said after they achieved 90nm
> SRAM, they weren't anywhere as rosy as they are now with
> 65nm.
>
> > If my recollection is correct, it isn't looking good at all for
> > 65 nm, as the passive leakage problems are even worse. Mid-2007
> > for mass production isn't what Intel are hoping for (or claiming),
> > but IS what ITRS are predicting ....
> >
> > I shall not be holding my breath for 65 nm; you are welcome to
> > hold yours for it :-)
>
> I am holding my breath! :-)
>
>
Anonymous
September 1, 2004 7:34:21 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

On 31 Aug 2004 16:23:33 GMT, nmm1@cus.cam.ac.uk (Nick Maclaren) wrote:

>I am pretty sure that Intel could cool the chip, even at that speed.
>A factory-fitted silver heatsink, with high-speed water-cooling to
>a heat exchanger in front of a large and fast fan, bolted into a
>heavy chassis, should do the job.

Indeed, I read a while ago that someone actually did crank a P4 to 5
GHz with the aid of a custom-built liquid cooling system. Of course,
it was a "because it's there" personal project rather than a
commercial product.

>As a demonstration of virtuosity, it would be excellent. As a
>system to sell in large numbers, perhaps not.

Quite.

--
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
Anonymous
September 1, 2004 7:35:00 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

On Wed, 01 Sep 2004 10:40:00 +0200, Jan Vorbrüggen
<jvorbrueggen-not@mediasec.de> wrote:

>Dumping all those watts in the environment, absent water cooling, is more
>of a problem. I'd rather not have several hundred watts heating the air in
>my office, thank you.

For me, that would be an advantage: I need the heat anyway; it might
as well be doing useful work on the way. It's the cost of the system
that'd be a problem.

--
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
Anonymous
September 1, 2004 11:35:36 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

On Wed, 01 Sep 2004 06:30:13 GMT, "Raymond" <no@all.net> wrote:

>Beyond 2 cores, I don't see much benefit adding more cores for desktops,
>not today, and not tomorrow, notwithstanding a lot more intense use of
>multi-threading. I just don't see how the OS, or any compiler, can
>possibly deal with the main logical issues involved in synchronization
>and concurrency, automagically turning an otherwise mostly STA program
>into a multi-threaded one.

We had exactly that argument 15 years ago with regard to parallel
processing on servers and supercomputers.

It won't surprise me in the least if 15 years from now, when the
conversation is about multiple cores in digital watches or whatever,
someone says "we had exactly that argument 15 years ago with regard to
parallel processing on desktops" :) 

--
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
Anonymous
September 1, 2004 11:50:09 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

In article <41362416.1434651@news.eircom.net>,
Russell Wallace <wallacethinmintr@eircom.net> wrote:
>On Wed, 01 Sep 2004 06:30:13 GMT, "Raymond" <no@all.net> wrote:
>
>>Beyond 2 cores, I don't see much benefit adding more cores for desktops,
>>not today, and not tomorrow, notwithstanding a lot more intense use of
>>multi-threading. I just don't see how the OS, or any compiler, can
>>possibly deal with the main logical issues involved in synchronization
>>and concurrency, automagically turning an otherwise mostly STA program
>>into a multi-threaded one.
>
>We had exactly that argument 15 years ago with regard to parallel
>processing on servers and supercomputers.

And 30 years ago. I wasn't in this game 45 years ago.

>It won't surprise me in the least if 15 years from now, when the
>conversation is about multiple cores in digital watches or whatever,
>someone says "we had exactly that argument 15 years ago with regard to
>parallel processing on desktops" :) 

Nor would it surprise me. Raymond makes one good point, though he
gets it slightly wrong!

There is effectively NO chance of automatic parallelisation working
on serial von Neumann code of the sort we know and, er, love. Not
in the near future, not in my lifetime and not as far as anyone can
predict. Forget it.

This has the consequence that large-scale parallelism is not a viable
general-purpose architecture until and unless we move to a paradigm
that isn't so intractable. There are such paradigms (functional
programming is a LITTLE better, for a start), but none have taken
off as general models. The HPC world is sui generis, and not relevant
in this thread.

So he would be right if he replaced "beyond 2 cores" by "beyond a
small number of cores". At least for the next decade or so.


Regards,
Nick Maclaren.
Anonymous
September 2, 2004 12:55:08 AM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

In article <b7eb1fbe.0409011245.52e96dd3@posting.google.com>,
G <gaf1234567890@hotmail.com> wrote:
>
>Actually I wasn't even thinking about anything remotely as complicated
>as that.

Don't ever try to track down a bug in a GUI system, then :-( I was
not joking when I said that was unrealistically simple.

>What I thought is that since XAML is declarative in nature, that an
>"inexperienced programmer mixing threads and OO callbacks" (Joe's
>comment) wouldn't really be doing the coding at all. It would be done
>(and theoretically optimized) by the implementation that sits behind
>it.

Grrk. I don't know XAML, but that sends shivers up my spine. It is
FAR harder to get that sort of thing right than it appears, unless
the language is designed to ensure that such parallelism cannot
create an inconsistency. And VERY few are.

>With respect to both threaded apps and GUI development, my only point
>is that it's one possible benefit of the newer higher level
>languages/tools. In fact I seem to remember the exact same case being
>made a long time ago for things like the UCSD P-System... Whether it's
>true or not I can't say.

It has been claimed more often than I care to think, and I have been
inflicted with such claims since the 1960s. Yes, it is a possible
benefit, but it is rarely delivered. Such languages typically make
one of three errors:

Relying on the user not making an error - not one.

Being so restrictive that they can't be used for real work.

Being so incomprehensible that nobody can understand them.


Regards,
Nick Maclaren.
Anonymous
September 2, 2004 10:39:34 AM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

On 1 Sep 2004 19:50:09 GMT, nmm1@cus.cam.ac.uk (Nick Maclaren) wrote:

>There is effectively NO chance of automatic parallelisation working
>on serial von Neumann code of the sort we know and, er, love. Not
>in the near future, not in my lifetime and not as far as anyone can
>predict. Forget it.

At least as far as your typical spaghetti C++ is concerned, yeah, not
going to happen anytime in the near future.

>This has the consequence that large-scale parallelism is not a viable
>general-purpose architecture until and unless we move to a paradigm
>that isn't so intractable.

And yet, by that argument there should be no market for the big
parallel servers and supercomputers; yet there is. The solution is
that for things that need the speed, people just write the parallel
code by hand.

If what's on the desktop when Doom X, Half-Life Y and Unreal Z come
out is a chip with 1024 individually slow cores, then those games will
be written to use 1024-way parallelism, just as weather forecasting
and quantum chemistry programs are today. Ditto for Photoshop, 3D
modelling, movie editing, speech recognition etc. There's certainly no
shortage of parallelism in the problem domains. The reason things like
games don't use parallel code today whereas weather forecasting does
isn't because of any software issue, it's because gamers don't have
the money to buy massively parallel supercomputers whereas
organizations doing weather forecasting do. When that changes, so will
the software.

--
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
Anonymous
September 2, 2004 1:01:35 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

In article <4136bd3e.40649206@news.eircom.net>,
Russell Wallace <wallacethinmintr@eircom.net> wrote:
>
>At least as far as your typical spaghetti C++ is concerned, yeah, not
>going to happen anytime in the near future.

Sigh. You are STILL missing the point. Spaghetti C++ may be about
as bad as it gets, but the SAME applies to the cleanest of Fortran,
if it is using the same programming paradigms. I can't get excited
over factors of 5-10 difference in optimisability, when we are
talking about improvements over decades.

>>This has the consequence that large-scale parallelism is not a viable
>>general-purpose architecture until and unless we move to a paradigm
>>that isn't so intractable.
>
>And yet, by that argument there should be no market for the big
>parallel servers and supercomputers; yet there is. The solution is
>that for things that need the speed, people just write the parallel
>code by hand.

Sigh. Look, I am in that area. If it were only so simple :-(

>If what's on the desktop when Doom X, Half-Life Y and Unreal Z come
>out is a chip with 1024 individually slow cores, then those games will
>be written to use 1024-way parallelism, just as weather forecasting
>and quantum chemistry programs are today. Ditto for Photoshop, 3D
>modelling, movie editing, speech recognition etc. There's certainly no
>shortage of parallelism in the problem domains. The reason things like
>games don't use parallel code today whereas weather forecasting does
>isn't because of any software issue, it's because gamers don't have
>the money to buy massively parallel supercomputers whereas
>organizations doing weather forecasting do. When that changes, so will
>the software.

Oh, yeah. Ha, ha. I have been told that more-or-less continually
since about 1970. Except for the first two thirds of your first
sentence, it is nonsense.

Not merely do people sweat blood to get such parallelism, they
often have to change their algorithms (sometimes to ones that are
less desirable, such as being less accurate), and even then only
SOME problems can be parallelised.


Regards,
Nick Maclaren.
Anonymous
September 2, 2004 2:39:39 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

spinlock wrote:

> We are on track for mass shipment of a billion (that's with a B)
> transistor die by '08.

Who's "we" ?

I have read that there will be ~1.7e9 transistors in Montecito.
Cache (2*1 MB L2 + 2*12 MB L3) probably accounts for ~90% of the
transistor count. Montecito is expected next year.

At 90 nm, please correct me if I am wrong, the chip would occupy
between 650 mm^2 and 750 mm^2. Is that possible?

> We shall all now bow toward Santa Clara.

Whatever floats your boat.

--
Regards, Grumble
Anonymous
September 2, 2004 2:39:40 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

In article <ch6m8b$grg$1@news-rocq.inria.fr>, Grumble <a@b.c> wrote:
>spinlock wrote:
>
>> We are on track for mass shipment of a billion (that's with a B)
>> transistor die by '08.
>
>Who's "we" ?

A good question. But note that "by '08" includes "in 2005".

>I have read that there will be ~1.7e9 transistors in Montecito.
>Cache (2*1 MB L2 + 2*12 MB L3) probably accounts for ~90% of the
>transistor count. Montecito is expected next year.

By whom is it expected? And how is it expected to appear? Yes,
someone will wave a chip at IDF and claim that it is a Montecito,
but are you expecting it to be available for internal testing,
to all OEMs, to special customers, or on the open market?


Regards,
Nick Maclaren.
Anonymous
September 2, 2004 2:39:41 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

Nick Maclaren wrote:
>>Montecito is expected next year.
>
> By whom is it expected? And how is it expected to appear? Yes,
> someone will wave a chip at IDF and claim that it is a Montecito,
> but are you expecting it to be available for internal testing,
> to all OEMS, to special customers, or on the open market?

By Intel and everyone who has been believing their repeated, unwavering
claims that mid-2005 will see commercial revenue shipments of Montecito.
Based on all the past releases in IPF, I expect a "launch" in June '05
and customers will have systems running in their environments around
August. There should be Montecito demonstrations at this coming IDF.
There were wafers shown at the last IDF. If my anticipated schedule is
correct, OEMs will have test chips soon.

Alex
--
My words are my own. They represent no other; they belong to no other.
Don't read anything into them or you may be required to compensate me
for violation of copyright. (I do not speak for my employer.)
Anonymous
September 2, 2004 3:28:50 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

On Wed, 01 Sep 2004 13:52:11 -0700, spinlock wrote:

> We are on track for mass shipment of a billion(that's with a B) transistor
> die by '08.
>
> We shall all now bow toward Santa Clara.
>
> Moore Rules!!!!

Ummm... the 'lock' fell off your 'spin'
Anonymous
September 2, 2004 4:16:37 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

Nick Maclaren wrote:

> Grumble wrote:
>
>> spinlock wrote:
>>
>>> We are on track for mass shipment of a billion (that's with a B)
>>> transistor die by '08.
>>
>> Who's "we" ?
>
> A good question. But note that "by '08" includes "in 2005".

I took "by 2008" to mean "sometime in 2008". Otherwise he would have
said "by 2005" or "by 2006", don't you think?

>> I have read that there will be ~1.7e9 transistors in Montecito.
>> Cache (2*1 MB L2 + 2*12 MB L3) probably accounts for ~90% of the
>> transistor count. Montecito is expected next year.
>
> By whom is it expected? And how is it expected to appear? Yes,
> someone will wave a chip at IDF and claim that it is a Montecito,
> but are you expecting it to be available for internal testing,
> to all OEMS, to special customers, or on the open market?

In November 2003, Intel's roadmap claimed Montecito would appear in
2005. Six months later, Otellini mentioned 2005 again. In June 2004, Intel
supposedly showcased Montecito dies, and claimed that testing had begun.

http://www.theinquirer.net/?article=15917
http://www.xbitlabs.com/news/cpu/display/20040219125800...
http://www.xbitlabs.com/news/cpu/display/20040619180753...

Perhaps Intel is being overoptimistic, but, as far as I understand, they
claim Montecito will be ready in 2005.

--
Regards, Grumble
Anonymous
September 2, 2004 4:16:38 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

In article <ch6s4q$ict$1@news-rocq.inria.fr>, Grumble <a@b.c> writes:
|> >
|> > By whom is it expected? And how is it expected to appear? Yes,
|> > someone will wave a chip at IDF and claim that it is a Montecito,
|> > but are you expecting it to be available for internal testing,
|> > to all OEMS, to special customers, or on the open market?
|>
|> In November 2003, Intel's roadmap claimed Montecito would appear in
|> 2005. 6 months later, Otellini mentioned 2005 again. In June 2004, Intel
|> supposedly showcased Montecito dies, and claimed that testing had begun.
|>
|> Perhaps Intel is being overoptimistic, but, as far as I understand, they
|> claim Montecito will be ready in 2005.

I am aware of that. Given that Intel failed to reduce the power
going to 90 nm for the Pentium 4, that implies it will need 200
watts. Given that HP have already produced a dual-CPU package,
they will have boards rated for that. Just how many other vendors
will have?

Note that Intel will lose more face if they produce the Montecito
and OEMs respond by dropping their IA64 lines than if they make
it available only on request to specially favoured OEMs.


Regards,
Nick Maclaren.
Anonymous
September 2, 2004 6:15:52 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

On 2 Sep 2004 09:01:35 GMT, nmm1@cus.cam.ac.uk (Nick Maclaren) wrote:

>Sigh. You are STILL missing the point. Spaghetti C++ may be about
>as bad as it gets, but the SAME applies to the cleanest of Fortran,
>if it is using the same programming paradigms. I can't get excited
>over factors of 5-10 difference in optimisability, when we are
>talking about improvements over decades.

"Cleanest of Fortran" usually means vector-style code, which is a
reasonable target for autoparallelization. I'll grant you if you took
a pile of spaghetti C++ and translated line-for-line to Fortran, the
result wouldn't autoparallelize with near-future technology any more
than the original did.

>>And yet, by that argument there should be no market for the big
>>parallel servers and supercomputers; yet there is. The solution is
>>that for things that need the speed, people just write the parallel
>>code by hand.
>
>Sigh. Look, I am in that area. If it were only so simple :-(

I didn't claim it was simple. I claimed that, even though it's
complicated, it still happens.

>>If what's on the desktop when Doom X, Half-Life Y and Unreal Z come
>>out is a chip with 1024 individually slow cores, then those games will
>>be written to use 1024-way parallelism, just as weather forecasting
>>and quantum chemistry programs are today. Ditto for Photoshop, 3D
>>modelling, movie editing, speech recognition etc. There's certainly no
>>shortage of parallelism in the problem domains. The reason things like
>>games don't use parallel code today whereas weather forecasting does
>>isn't because of any software issue, it's because gamers don't have
>>the money to buy massively parallel supercomputers whereas
>>organizations doing weather forecasting do. When that changes, so will
>>the software.
>
>Oh, yeah. Ha, ha. I have been told that more-or-less continually
>since about 1970. Except for the first two thirds of your first
>sentence, it is nonsense.

So you claim weather forecasting and quantum chemistry _don't_ use
parallel processing today? Or that gamers would be buying 1024-CPU
machines today if Id would only get around to shipping parallel code?

>Not merely do people sweat blood to get such parallelism, they
>often have to change their algorithms (sometimes to ones that are
>less desirable, such as being less accurate), and even then only
>SOME problems can be parallelised.

I didn't claim sweating blood and changing algorithms weren't
required. However, I'm not aware of any CPU-intensive problems of
practical importance that _can't_ be parallelized; do you have any
examples of such?

--
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
Anonymous
September 2, 2004 6:39:45 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

In article <4137299e.68397045@news.eircom.net>,
wallacethinmintr@eircom.net (Russell Wallace) writes:
|>
|> "Cleanest of Fortran" usually means vector-style code, which is a
|> reasonable target for autoparallelization. ...

Not in my world, it doesn't. There are lots of other extremely
clean codes.

|> >Oh, yeah. Ha, ha. I have been told that more-or-less continually
|> >since about 1970. Except for the first two thirds of your first
|> >sentence, it is nonsense.
|>
|> So you claim weather forecasting and quantum chemistry _don't_ use
|> parallel processing today? Or that gamers would be buying 1024-CPU
|> machines today if Id would only get around to shipping parallel code?

I am claiming that a significant proportion of the programs don't.
In a great many cases, people have simply given up attempting the
analyses, and have moved to less satisfactory ones that can be
parallelised. In some cases, they have abandoned whole lines of
research! Your statement was that the existing programs would
be parallelised:

then those games will be written to use 1024-way parallelism,
just as weather forecasting and quantum chemistry programs are
today

|> >Not merely do people sweat blood to get such parallelism, they
|> >often have to change their algorithms (sometimes to ones that are
|> >less desirable, such as being less accurate), and even then only
|> >SOME problems can be parallelised.
|>
|> I didn't claim sweating blood and changing algorithms weren't
|> required. However, I'm not aware of any CPU-intensive problems of
|> practical importance that _can't_ be parallelized; do you have any
|> examples of such?

Yes. Look at ODEs for one example that is very hard to parallelise.
Anything involving sorting is also hard to parallelise, as are many
graph-theoretic algorithms. Ones that are completely hopeless are
rarer, but exist - take a look at the "Spectral Test" in Knuth for
a possible candidate.

The characteristic of the most common class of unparallelisable
algorithm is that it is iterative: each step is small (i.e.
effectively scalar), yet makes global changes (and the cost of
doing so is very small). This means that steps are never
independent, and are therefore serialised.
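
To make that concrete, here is a trivial explicit-Euler ODE loop (a
generic sketch, not any particular production code). The loop-carried
dependence is exactly the pattern described above: each step is tiny,
yet the next one cannot start until it has finished, so the time steps
cannot be spread across cores:

    #include <cstdio>

    // Integrate dy/dt = -y from y(0) = 1 with explicit Euler.
    // The loop-carried dependence y_{n+1} = f(y_n) is what serialises it:
    // step n+1 cannot start until step n has produced its result.
    int main()
    {
        double y  = 1.0;
        double dt = 1e-6;
        for (long n = 0; n < 1000000; ++n)
            y = y + dt * (-y);             // each step needs the previous y
        std::printf("y(1) ~= %f (exact 1/e ~= 0.367879)\n", y);
    }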

What I can't say is how many CPU-intensive problems of practical
importance are intrinsically unparallelisable - i.e. they CAN'T
be converted to a parallelisable form by changing the algorithms.
But that is not what I claimed.


Regards,
Nick Maclaren.
Anonymous
September 2, 2004 6:48:38 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

On 2 Sep 2004 14:39:45 GMT, nmm1@cus.cam.ac.uk (Nick Maclaren) wrote:

>Your statement was that the existing programs would
>be parallelised:
>
> then those games will be written to use 1024-way parallelism,
> just as weather forecasting and quantum chemistry programs are
> today

Oh! I think we've been talking at cross purposes then.

I'm not at all talking about taking existing code and tweaking it to
run in parallel. I agree that isn't always feasible. I'm talking about
taking an existing problem domain and writing new code to solve it
with parallel algorithms.

>What I can't say is how many CPU-intensive problems of practical
>importance are intrinsically unparallelisable - i.e. they CAN'T
>be converted to a parallelisable form by changing the algorithms.
>But that is not what I claimed.

Okay, I'm specifically talking about using different algorithms where
necessary.

--
"Sore wa himitsu desu."
To reply by email, remove
the small snack from address.
Anonymous
September 2, 2004 8:34:08 PM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

Russell Wallace wrote:

> On 1 Sep 2004 19:50:09 GMT, nmm1@cus.cam.ac.uk (Nick Maclaren) wrote:
>
>
>>There is effectively NO chance of automatic parallelisation working
>>on serial von Neumann code of the sort we know and, er, love. Not
>>in the near future, not in my lifetime and not as far as anyone can
>>predict. Forget it.
>
>
> At least as far as your typical spaghetti C++ is concerned, yeah, not
> going to happen anytime in the near future.
>

The statement is wrong in any case. C can be translated to hardware
(which is de facto parallelism) by "constraints", i.e., refusing to
translate its worst features (look up SystemC, C-to-hardware and
similar). Other languages can do it without constraints. Finally,
any code, no matter how bad, could be so translated by executing it
(simulating it) and then translating what it does dynamically rather
than statically. The simulation can then give the programmer a report
of what was not executed, and the programmer modifies the test cases
until all code has been so translated.

>
>>This has the consequence that large-scale parallelism is not a viable
>>general-purpose architecture until and unless we move to a paradigm
>>that isn't so intractable.
>
>
> And yet, by that argument there should be no market for the big
> parallel servers and supercomputers; yet there is. The solution is
> that for things that need the speed, people just write the parallel
> code by hand.
>
> If what's on the desktop when Doom X, Half-Life Y and Unreal Z come
> out is a chip with 1024 individually slow cores, then those games will
> be written to use 1024-way parallelism, just as weather forecasting
> and quantum chemistry programs are today. Ditto for Photoshop, 3D
> modelling, movie editing, speech recognition etc. There's certainly no
> shortage of parallelism in the problem domains. The reason things like
> games don't use parallel code today whereas weather forecasting does
> isn't because of any software issue, it's because gamers don't have
> the money to buy massively parallel supercomputers whereas
> organizations doing weather forecasting do. When that changes, so will
> the software.
>


--
Samiam is Scott A. Moore

Personal web site: http:/www.moorecad.com/scott
My electronics engineering consulting site: http://www.moorecad.com
ISO 7185 Standard Pascal web site: http://www.moorecad.com/standardpascal
Classic Basic Games web site: http://www.moorecad.com/classicbasic
The IP Pascal web site, a high performance, highly portable ISO 7185 Pascal
compiler system: http://www.moorecad.com/ippas

Being right is more powerful than large corporations or governments.
The right argument may not be pervasive, but the facts eventually are.
Anonymous
a b à CPUs
September 3, 2004 12:14:16 AM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

> Precisely. As far as the easiness of doing it is concerned, the
> question to ask is how the proportion of systems/money/effort/etc.
> spent on large scale parallel applications is varying over time,
> relative to that on all performance-limited applications.

Getting back to the issue of multiprocessors for "desktops" or even
laptops: I agree that parallelizing Emacs is going to be excruciatingly
painful so I don't see it happening any time soon. But that's not really
the question.

I think that as SMP and SMT progresses on those machines (first as
bi-processors), you'll see more applications use *very* coarse grain
parallelism. It won't make much difference performance-wise: the extra
processor will be used for unrelated tasks like "background foo" which isn't
done now because it would slow things down too much on a uniprocessor.
Existing things mostly won't be parallelized, but the extra CPU will be used
for new things of dubious value.
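
A minimal sketch of that coarse-grain style (an editorial
illustration, not anyone's product code), using POSIX threads: the
main thread keeps servicing the user while a second thread grinds
away at an unrelated "background foo" that on a uniprocessor would
have stolen foreground cycles.

    /* Coarse-grain parallelism: one background thread for an
       unrelated task.  Compile with -lpthread. */
    #include <pthread.h>
    #include <stdio.h>

    static void *background_foo(void *arg)
    {
        volatile long work = 0;
        long i;
        for (i = 0; i < 100000000L; i++)   /* stand-in for indexing etc. */
            work += i;
        (void)arg;
        return NULL;
    }

    int main(void)
    {
        pthread_t bg;
        int i;

        pthread_create(&bg, NULL, background_foo, NULL);

        for (i = 0; i < 5; i++)            /* stand-in for foreground work */
            printf("foreground servicing the user: %d\n", i);

        pthread_join(bg, NULL);
        return 0;
    }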

Your second CPU will be mostly idle, of course, but so is the first CPU
anyway ;-)



Stefan
Anonymous
a b à CPUs
September 3, 2004 12:27:34 AM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

In article <jwvvfewh1wk.fsf-monnier+comp.arch@gnu.org>,
Stefan Monnier <monnier@iro.umontreal.ca> wrote:
>
>I think that as SMP and SMT progresses on those machines (first as
>bi-processors), you'll see more applications use *very* coarse grain
>parallelism. It won't make much difference performance-wise: the extra
>processor will be used for unrelated tasks like "background foo" which isn't
>done now because it would slow things down too much on a uniprocessor.
>Existing things mostly won't be parallelized, but the extra CPU will be used
>for new things of dubious value.

I regret to say that I agree with you :-(


Regards,
Nick Maclaren.
Anonymous
a b à CPUs
September 3, 2004 12:47:32 AM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

Stefan Monnier wrote:

>
> Getting back to the issue of multiprocessors for "desktops" or even
> laptops: I agree that parallelizing Emacs is going to be excruciatingly
> painful so I don't see it happening any time soon. But that's not really
> the question.
>
> I think that as SMP and SMT progresses on those machines (first as
> bi-processors), you'll see more applications use *very* coarse grain
> parallelism. It won't make much difference performance-wise: the extra
> processor will be used for unrelated tasks like "background foo" which isn't
> done now because it would slow things down too much on a uniprocessor.
> Existing things mostly won't be parallelized, but the extra CPU will be used
> for new things of dubious value.
>
> Your second CPU will be mostly idle, of course, but so is the first CPU
> anyway ;-)
>

I sometimes think: no one experienced the microprocessor revolution. Or
perhaps: everyone has adjusted his recollection so that he thinks he saw
things much more clearly than he did. Or perhaps: the world is divided
between those whose world-view was built before the revolution, who are
never going to acknowledge exactly what they missed, and those whose
world-view was built too late to have enough perspective to see just
how badly everybody missed it.

The world of programming is about to change in ways that no big-iron or
cluster megaspending program ever could accomplish. I'm tempted to say:
get used to it, but it would be socially unacceptable and we're going to
have a repeat of what happened with the microprocessor revolution:
almost no one is going to put his hand to his forehead and say, "I
should have seen that coming, but I didn't."

RM
Anonymous
a b à CPUs
September 3, 2004 2:45:03 AM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

Robert Myers wrote:

[SNIP]

> The world of programming is about to change in ways that no big-iron or
> cluster megaspending program ever could accomplish. I'm tempted to say:
> get used to it, but it would be socially unacceptable and we're going to
> have a repeat of what happened with the microprocessor revolution:
> almost no one is going to put his hand to his forehead and say, "I
> should have seen that coming, but I didn't."

More CPUs per chunk of memory ?

Back in 1990 as a PFY at INMOS I asked about why they took the
approach they did (OCCAM/CSP/Transputers). I was given an explanation
that included trends in heat dissipation, memory latency, clock rates,
leakage etc. By and large it's panning out as predicted, although the
timescales have proven to be a little longer (kudos to the guys doing
the chip design and silicon physics).


Cheers,
Rupert
Anonymous
a b à CPUs
September 3, 2004 2:45:04 AM

Archived from groups: comp.arch,comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

Rupert Pigott wrote:
> Robert Myers wrote:
>
> [SNIP]
>
>> The world of programming is about to change in ways that no big-iron
>> or cluster megaspending program ever could accomplish. I'm tempted to
>> say: get used to it, but it would be socially unacceptable and we're
>> going to have a repeat of what happened with the microprocessor
>> revolution: almost no one is going to put his hand to his forehead and
>> say, "I should have seen that coming, but I didn't."
>
>
> More CPUs per chunk of memory ?
>
> Back in 1990 as a PFY at INMOS I asked about why they took the
> approach they did (OCCAM/CSP/Transputers). I was given an explanation
> that included trends in heat dissipation, memory latency, clock rates,
> leakage etc. By and large it's panning out as predicted, although the
> timescales have proven to be a little longer (kudos to the guys doing
> the chip design and silicon physics).
>

Yes, indeed.

That's a powerful insight, but I would characterize it as the hardware
driver for what I see as a more profound revolution in software. Who
knows, maybe the day of Occam is at hand. :-).

The smallest unit that anyone will ever program for non-embedded
applications will support more execution pipes than I care to guess at,
but certainly more than one. Single-pipe programming, using tools
appropriate only for single-pipe programming, will come to seem just as
natural as doing physics without vectors and tensors.

The fact that this reality is finally percolating into the lowly but
ubiquitous PC is what I'm counting on for magic.

RM