
Intel server CPU roadmap updated

Tags:
  • CPUs
  • Servers
  • Intel
April 1, 2006 12:54:10 PM

http://www.dailytech.com/article.aspx?newsid=833
In Q3 we will have the first Yonah based server chips.
Intel will continue production on the Itanium with a max of 24 MB cache 8O


April 1, 2006 2:04:43 PM

Err, what core (Core Duo / Pentium M style) was the Intel Xeon LV, released March 14th, 2006, based on?

Saw it before, the new Xeon Processor DP (LV) 5063 is going to rock. (Assuming based on Core Duo / P-M core, if not Woodcrest).

If it is Woodcrest based, it'll rock even more. :) 
April 1, 2006 2:07:44 PM

Quote:
current Xeon processors are bad on the Paxville core

That says a lot.
April 1, 2006 2:50:18 PM

Quote:
In Q3 we will have the first Yonah based server chips.
Intel will continue production on the Itanium with a max of 24 MB cache


Who cares about Xeons and Itanics?
They're no match to Opterons. 8)
April 1, 2006 3:12:06 PM

For now!
Laugh while you still can, fanboy.

Almost forgot about the Sossaman core that was Yonah based. I stand corrected.
These ones will use Woodcrest!
April 1, 2006 3:45:43 PM

Conroe and Woodcrest are beaten already by K8L.

For now!
Keep crying, fanboy. 8)
April 1, 2006 4:05:14 PM

Quote:


Who cares about Xeons and Itanics?
They're no match to Opterons. 8)


Code:

Actually a 4-way IA-64 will have greater performance than my 4-way Opteron 270. (Yes, even with both at 2 GHz).

IA-64 will also scale to 512/1024 processor designs.

Opteron K8 cores are only 3-issue cores; IA-64 cores are 11-issue and use an even shorter pipeline (better for performance than a long pipeline) than the Opteron K8.

Conroe, Merom, Woodcrest, Kentsfield will all be using 4-issue cores.

More IA-64 cores will fit on an equal amount of silicon than Opteron (K8) cores... IA-64 just needs more L1/L2/L3 cache to scale better IPC-wise on existing applications. Data caches could remain the same size, but may as well scale both.

Transistor for transistor, the IA-64 core's FPU is significantly more powerful, even at a fraction of the clock speed, compared to my Opteron K8 FPU.

It is known that K8L is still only going to be a 3-issue, per core, design; it'll just permit 4 cores at 65nm, more cache, etc., and scale to 45nm if AMD want.

Considering how many IA-64 cores will fit in a 65nm and 45nm design... well, it should speak for itself by now.

Most IA-64 processors you're thinking of are only 180nm or 130nm, which as such have 1/4 to 1/16 the number of transistors 'at equal production cost'. When they move to 65nm with 2x - 8x the number of cores per processor, what do you expect to happen?

Intel want hundreds of cores per processor; IA-64 was designed to do just that, only it is 5 years ahead of its time.

AMD now have IA-64 engineers.

HP have ties to both companies.

Surely you see what this means.
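The die-shrink arithmetic above (1/4 to 1/16 the transistor count at 180nm/130nm vs newer nodes) can be sanity-checked with ideal scaling, where transistor density goes as the inverse square of the feature size. A rough sketch under that idealized assumption, not a real density model:

```python
# Ideal-scaling sketch: transistor density goes as 1/(feature size)^2, so a
# shrink from 180 nm to 90 nm quadruples the transistor budget at equal area.
def density_ratio(old_nm, new_nm):
    """How many times more transistors fit after a shrink, assuming ideal scaling."""
    return (old_nm / new_nm) ** 2

print(density_ratio(180, 90))   # 4.0  -> the '1/4 the transistors' end of the claim
print(density_ratio(180, 45))   # 16.0 -> the '1/16 the transistors' end
print(density_ratio(130, 65))   # 4.0  -> a 130 nm part vs the same die at 65 nm
```

Real processes never scale quite this cleanly, but it shows where the 1/4-1/16 range comes from.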

Quote:
You can't bullzhit me!


Yeah, once something is 100% bullzhit, it is impossible to add any more without increasing the size / mass of the aforementioned bullzhit. :p 

Do bear in mind my home machine is an Opteron 270 with 4 cores, and I am ready to admit IA-64 is going to offer higher-performing processor cores than K8L; just some people go into denial as they can't handle the truth. :p 



Sure, AMD's K10 design may change all that, by incorporating both x64 (AMD64) and IA-64 (an AMD variant thereof) within one 65nm, 45nm or 32nm processor. (Perhaps I have said too much for people to handle with this one :oops:  )

AMD shared AMD64 (x86_64, or just x64) with Intel, so they now have Intel EM64T (basically the same instruction extensions).

Intel are now 'sharing' IA-64 with AMD. (AMD have HP/Intel IA-64 engineers over there right now).

So will you say AMD suck when they have both x64 (AMD64) and IA-64 processor cores within one CPU, because it was based on Itanium ?

They are getting a slice of a US$10 billion investment, and it is costing them little in comparison to get in on the multiple-cores-per-processor action.

AMD gave up AMD64 so Intel could get EM64T, and in exchange they get access to Intel IA-64 and are 'in on' a significant investment, possibly the largest IT CPU-related investment in the history of humankind. At least for the next 50-100 years.


Tip of the day: When rettihSlluB click
April 1, 2006 5:53:32 PM

Quote:
Saw it before, the new Xeon Processor DP (LV) 5063 is going to rock. (Assuming based on Core Duo / P-M core, if not Woodcrest).

I've already commented to DailyTech, but the 5063 is mislabelled. Based on the 2x2MB cache configuration and the 1066MHz FSB it must be Dempsey based and not a Sossaman. Since Dempsey can't reach Sossaman's power levels, it's incorrectly labelled as an LV part. The 5063 is actually a Xeon DP MV.

http://www.theinquirer.net/?article=26070

This is an older roadmap and so it doesn't list the 3.73GHz 5080, which bumps down the pricing and cancels the 2.5GHz 5020. The DailyTech article keeps being updated, so occasionally they terminate the chart at the 5050. In any case, The Inquirer does list the 5063 as an MV part.
April 1, 2006 6:34:20 PM

All the 50xx models are Dempsey cores so they would be Netburst based. Woodcrest models will debut as the 51xx.

Talking to DailyTech, they've now updated the charts and included bin categories, which is generally not disclosed although easy to guess. The 5063 is actually a Bin-2 3.46GHz 5070 but running at 3.2GHz to save heat and power, allowing a TDP of 95W. It will therefore be a Xeon MV. That probably makes it a great overclocker, but there's really no point in buying a Xeon for that.
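For a rough feel of how binning a 3.46GHz part down to 3.2GHz saves power: dynamic power scales roughly as frequency × voltage². The starting TDP and the voltages below are made-up illustrative values, not real Dempsey specs:

```python
# Hedged sketch of P = C * f * V^2 scaling. 3.46 GHz -> 3.2 GHz comes from the
# post; the 130 W starting point and 1.30 V -> 1.25 V drop are ASSUMED numbers
# purely to show the shape of the arithmetic.
def dynamic_power(base_power_w, f_ratio, v_ratio):
    """Scale a baseline dynamic power by a frequency ratio and a voltage ratio."""
    return base_power_w * f_ratio * v_ratio ** 2

print(round(3.2 / 3.46, 3))  # frequency-only factor, roughly 0.925
print(dynamic_power(130, 3.2 / 3.46, 1.25 / 1.30))  # roughly 111 W
```

Note the frequency/voltage drop alone doesn't get all the way from an assumed 130W to 95W; binning also selects dies with lower leakage, which this toy formula ignores.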

Tulsa, which is set to replace the Paxville MP in the 4-way market, will also be Netburst based. It'll be a 65nm part, but the major advantage is that it'll have a 16MB shared L3 cache. This will eliminate cache coherency traffic between the two cores on the same processor, meaning only 4 caches need to be kept coherent instead of 8. This will be a great help considering the 4 processors will only have 2 800MHz FSBs. Despite the massive cache, the power levels shouldn't be that much higher than Dempsey because Intel will be using as many power-oriented transistors as possible rather than speed-oriented ones. The effect on performance should be minimal though, because they've only been implemented on sections that don't need the extra speed anyway. The focus is the L3 cache, which uses sleep transistors similar to Yonah, allowing it to power itself down when sections aren't in use. Tulsa will probably arrive in Q3 along with the Core architecture.
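The coherency arithmetic above can be sketched quickly: a 4-socket dual-core box with per-core caches has 8 caches to keep coherent, while a shared L3 per package cuts that to 4, and pairwise snoop relationships scale as n(n-1)/2. A toy count, not a model of Intel's actual protocol:

```python
# With 8 independent caches there are 28 pairwise coherency relationships;
# sharing an L3 per package (Tulsa-style) drops that to 4 caches and 6 pairs.
def snoop_pairs(n_caches):
    """Number of distinct cache pairs that must stay coherent."""
    return n_caches * (n_caches - 1) // 2

print(snoop_pairs(8))  # 28 -> per-core caches across 4 dual-core sockets
print(snoop_pairs(4))  # 6  -> one shared L3 per socket, far less bus traffic
```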

The 4-way segment won't transition to Core until 2007 when the Cloverton MP is launched. We know Cloverton will simply be 2 Woodcrests put together, but I'm betting that the Cloverton MP will get the Tulsa treatment and receive a large shared L3 cache between the 4 cores. This is quite likely considering server chips often benefit more from larger caches than raw clock speeds. Tigerton has also been mentioned, and it'll probably be Cloverton MP's replacement rather than an older alternate name for it. It's unclear whether Tigerton will receive some form of the delayed CSI or OMC, but Tigerton will probably be a true quad core solution.
April 1, 2006 6:50:18 PM

So which core is the Dual-Core Intel® Xeon® processor LV using:

See:
http://www.intel.com/products/processor/xeon/index.htm
then click:
View specification chart
then:
Dual-Core Intel® Xeon® processor LV

I just want confirmation, that's all. I suspect they use the same core as the current 1st-gen Core Duo (vs the 2nd gen, which is going to be Merom, Conroe, Woodcrest, etc. based).

They don't even have part numbers for them on the Intel website, and they were announced March 14, 2006.
April 1, 2006 7:25:33 PM

Quote:
Code:


Actually a 4-way IA-64 will have greater performance than my 4-way Opteron 270. (Yes, even with both at 2 GHz).

IA-64 will also scale to 512/1024 processor designs.

Opteron K8 cores are only 3-issue cores; IA-64 cores are 11-issue and use an even shorter pipeline (better for performance than a long pipeline) than the Opteron K8.

Conroe, Merom, Woodcrest, Kentsfield will all be using 4-issue cores.

More IA-64 cores will fit on an equal amount of silicon than Opteron (K8) cores... IA-64 just needs more L1/L2/L3 cache to scale better IPC-wise on existing applications. Data caches could remain the same size, but may as well scale both.

Transistor for transistor, the IA-64 core's FPU is significantly more powerful, even at a fraction of the clock speed, compared to my Opteron K8 FPU.

It is known that K8L is still only going to be a 3-issue, per core, design; it'll just permit 4 cores at 65nm, more cache, etc., and scale to 45nm if AMD want.

Considering how many IA-64 cores will fit in a 65nm and 45nm design... well, it should speak for itself by now.

Most IA-64 processors you're thinking of are only 180nm or 130nm, which as such have 1/4 to 1/16 the number of transistors 'at equal production cost'. When they move to 65nm with 2x - 8x the number of cores per processor, what do you expect to happen?

Intel want hundreds of cores per processor; IA-64 was designed to do just that, only it is 5 years ahead of its time.

AMD now have IA-64 engineers.

HP have ties to both companies.

Surely you see what this means.




rettihSlluB wrote:
You can't bullzhit me!


Yeah, once something is 100% bullzhit, it is impossible to add any more without increasing the size / mass of the aforementioned bullzhit.

Do bear in mind my home machine is an Opteron 270 with 4 cores, and I am ready to admit IA-64 is going to offer higher-performing processor cores than K8L; just some people go into denial as they can't handle the truth.


Please, have a look at this and this.

If you still think Itanium leads the benchmarks, then, that's another story... :roll:
April 1, 2006 7:28:53 PM

The current dual core Xeon LV uses the Sossaman core, which is basically Yonah but validated for server use and with support for dual processors. The 2GHz version has a TDP of 31W. I'm not sure if the 1.67GHz version has the same TDP or if it's an even lower-voltage part with a 15W TDP.

The primary difference is that Sossaman supports 36-bit physical memory addressing, allowing it to handle more RAM. Yonah is limited to 32-bit memory addressing, but I believe that's actually due to the power-saving FSB on notebooks dropping the extra 4 bits because it's not worthwhile, rather than the core not supporting it. Sossaman and Yonah both support VT, but currently VT isn't enabled on Yonahs while Sossaman probably ships enabled. Yonah is supposed to get a new stepping soon with VT enabled, and it'll be coupled with a BIOS update which should also allow VT on current Yonahs.
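The addressing arithmetic above is easy to check: each extra physical address bit doubles the reachable memory, so 4 extra bits is a 16x jump. A quick sketch:

```python
# Physical address bits -> maximum addressable RAM in GiB.
def max_ram_gib(address_bits):
    return 2 ** address_bits // 2 ** 30  # total bytes divided by bytes per GiB

print(max_ram_gib(32))  # 4  -> Yonah's 32-bit limit
print(max_ram_gib(36))  # 64 -> Sossaman's 36-bit addressing
```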
April 2, 2006 7:52:03 AM

Thanks ltcommander_data 8)

Quote:


Please, have a look at this and this.

If you still think Itanium leads the benchmarks, then, that's another story... :roll:


Let's take a closer look at those scores then.

Itanium - clocked at under 60% of the Opteron system above it, gives similar performance.

Itanium - at 180nm or 130nm, is on par with systems based on 90nm and 65nm. (Transistors take about one quarter the die space.)

In the 2nd link, the mid-range 180nm or 130nm Itanium is offering +48% more FPU performance per clock than the highest-end Opteron (2.8 GHz) based on 90nm.

Wow, the Itanium clearly has no potential on 90, 65, 45, 32 nm (and as such at similar clock speeds on those nodes), as you clearly pointed out in your post, while disregarding the die size / clock speed comments above. :roll:

Then bear in mind the size of the IA-64 die compared to the K8 die; you can cram a hell of a lot more IA-64 dies in at 45nm and beyond than K8 dies. 8O
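A "+48% per clock" style comparison is just score divided by clock speed. The scores and clocks below are placeholder numbers to show the arithmetic, not the actual SPEC results linked in this thread:

```python
# Per-clock normalization: divide a benchmark score by the clock in GHz, then
# compare the ratios. Both scores here are HYPOTHETICAL, chosen only to show
# how a lower-clocked chip can come out ahead per clock.
def per_clock(score, ghz):
    return score / ghz

opteron = per_clock(1800, 2.8)   # made-up SPECfp-style score at 2.8 GHz
itanium = per_clock(1400, 1.6)   # made-up score at a much lower clock

print(round(itanium / opteron - 1, 2))  # fractional per-clock advantage: 0.36
```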

Now using those same links you provided, the following information is available on the same two pages:

Sun Microsystems: Sun Fire X4100
http://www.spec.org/osg/cpu2000/results/res2005q4/cpu20...

Hewlett-Packard Company: HP Integrity rx2620-2 (1.6GHz/6MB, Itanium 2)
http://www.spec.org/osg/cpu2000/results/res2005q1/cpu20...

Sun Microsystems: Sun Fire V40z
http://www.spec.org/osg/cpu2000/results/res2005q4/cpu20...

SGI: SGI Altix 3000 (1500MHz, Itanium 2)
http://www.spec.org/osg/cpu2000/results/res2005q4/cpu20...

Suddenly, using links on the two pages you provided, the IA-64 architecture looks pretty damn good. 8O
April 3, 2006 6:50:30 PM

Quote:
Let's take a closer look at those scores then.

Itanium - clocked at under 60% of the Opteron system above it, gives similar performance.

Itanium - at 180nm or 130nm, is on par with systems based on 90nm and 65nm. (Transistors take about one quarter the die space.)

In the 2nd link, the mid-range 180nm or 130nm Itanium is offering +48% more FPU performance per clock than the highest-end Opteron (2.8 GHz) based on 90nm.

Wow, the Itanium clearly has no potential on 90, 65, 45, 32 nm (and as such at similar clock speeds on those nodes), as you clearly pointed out in your post, while disregarding the die size / clock speed comments above.


In conclusion, Opterons still beat those Itanics because they can't clock higher than 2.0GHz. 8)
April 15, 2006 10:07:10 AM

Quote:

Suddenly, using links on the two pages you provided, the IA-64 architecture looks pretty damn good. 8O


I don't even know why people talk or waste their time with the Itanium. General-purpose VLIW architectures should die, die, die. Ever try to program for the PS2 VU units? Great if you write assembly, but terrible results using normal C/C++ compilers.

For specialized use - CISC, RISC, VLIW each have their benefits.
April 15, 2006 2:59:03 PM

Quote:
Who cares about Xeons and Itanics?
They're no match to Opterons.

Second that. :!:
April 15, 2006 5:04:25 PM

Quote:

Suddenly, using links on the two pages you provided, the IA-64 architecture looks pretty damn good. 8O


I don't even know why people talk or waste their time with the Itanium. General-purpose VLIW architectures should die, die, die. Ever try to program for the PS2 VU units? Great if you write assembly, but terrible results using normal C/C++ compilers.

For specialized use - CISC, RISC, VLIW each have their benefits.

Only if the numbers agreed with you. Like it or not, IA-64 is still ridiculously good at FP operations, and as compilers for EPIC get better, so will general code execution.
April 15, 2006 5:53:01 PM

Quote:
Let's take a closer look at those scores then.

Itanium - clocked at under 60% of the Opteron system above it, gives similar performance.

Itanium - at 180nm or 130nm, is on par with systems based on 90nm and 65nm. (Transistors take about one quarter the die space.)

In the 2nd link, the mid-range 180nm or 130nm Itanium is offering +48% more FPU performance per clock than the highest-end Opteron (2.8 GHz) based on 90nm.

Wow, the Itanium clearly has no potential on 90, 65, 45, 32 nm (and as such at similar clock speeds on those nodes), as you clearly pointed out in your post, while disregarding the die size / clock speed comments above.


In conclusion, Opterons still beat those Itanics because they can't clock higher than 2.0GHz. 8)

It's sad how you just disregard all data presented to you and spout out some ignorant statement like the one above.

AMD is good, but that doesn't mean Intel makes only shit. Be a little more open-minded.
April 15, 2006 6:32:03 PM

Quote:
Quote:
In conclusion, Opterons still beat those Itanics because they can't clock higher than 2.0GHz. 8)


Try fitting 24MB of cache onto your beloved Opteron. Your obsession with clock speed is the same obsession Intel had with the P4.
April 15, 2006 6:51:55 PM

Quote:

Only if the numbers agreed with you. Like it or not, IA-64 is still ridiculously good at FP operations, and as compilers for EPIC get better, so will general code execution.


Then why are the Alpha and MIPS slowly dying off?

You hardware people forget what drives technology - applications + software. The transition to optimized VLIW software is a hard one, and Intel had better provide kick-ass developer tools (which they probably do for high-end scientific computing). I'm not convinced VLIW architectures are suitable for dynamic + interpreted languages - the way software should be written in the future. Intel is moving hardware complexity into software. Thanks - just make the hardware a tenth the price and ten times faster and maybe I'll bite.

For non-scientific work, Itanium 2 is a turkey. Frankly, it's been a struggle to get developers off 32-bit OSes, x86 CPUs and that damned BIOS - VLIW CPUs are too big of a change for the non-high-end-scientific market. I was involved in a port of a large app to Itanium back in 2002. The end result was the app ran at a THIRD of the speed - no code changes except 64-bit-ness and compilation with Intel's wickedly optimized C/C++ compilers. Other than rewriting the whole app, we had to depend on the compiler for all the optimizations.

"Ridiculously good at FP operations" is great, but "mediocre at integer operations" is not a good thing.
April 15, 2006 7:17:50 PM

Quote:

Only if the numbers agreed with you. Like it or not, IA-64 is still ridiculously good at FP operations, and as compilers for EPIC get better, so will general code execution.


Then why are the Alpha and MIPS slowly dying off?

You hardware people forget what drives technology - applications + software. The transition to optimized VLIW software is a hard one, and Intel had better provide kick-ass developer tools (which they probably do for high-end scientific computing). I'm not convinced VLIW architectures are suitable for dynamic + interpreted languages - the way software should be written in the future. Intel is moving hardware complexity into software. Thanks - just make the hardware a tenth the price and ten times faster and maybe I'll bite.

For non-scientific work, Itanium 2 is a turkey. Frankly, it's been a struggle to get developers off 32-bit OSes, x86 CPUs and that damned BIOS - VLIW CPUs are too big of a change for the non-high-end-scientific market. I was involved in a port of a large app to Itanium back in 2002. The end result was the app ran at a THIRD of the speed - no code changes except 64-bit-ness and compilation with Intel's wickedly optimized C/C++ compilers. Other than rewriting the whole app, we had to depend on the compiler for all the optimizations.

"Ridiculously good at FP operations" is great, but "mediocre at integer operations" is not a good thing.

Alpha and all technologies within have been purchased by Intel from HP; as for MIPS, I can't be sure why, since they are used by SGI, and I frankly don't know too much about their business other than that they provide very good render farms.

Applications are software, just for clarification's sake. As well, I think you need to check what you are talking about with interpreted languages; from my experience, languages like Java and Perl - I guess even Ruby would fall under that label - aren't good for anything other than web applications.

Languages that compile on the fly might be easier to debug and are generally small in footprint, but just don't fit the current software environments we are all currently working in.

In the end, languages that need to be compiled, like C++, C# - I guess FORTRAN falls in there - are going to be the languages that will dominate for, I dare wager... forever.

On a side note, Intel does provide excellent SDKs, just like their brother in arms, Microsoft. As for issues with more complicated code, I think you grossly misunderstand EPIC. EPIC depends heavily on its compiler to organize code and data for ideal execution; no drastic changes to the software have to be made other than 64-bit-safe code, such as correct pointers, constants and API support. Otherwise the compiler does a very good job of organizing everything.

I'm not saying you're wrong; I am simply saying you don't quite understand what you are saying.
April 15, 2006 8:38:47 PM

Quote:
I'm not saying you're wrong; I am simply saying you don't quite understand what you are saying.


You need to update your knowledge. MIPS + SGI haven't been used for render farms for 4-5 years. Mostly Xeons and Opterons now due to the cost, and almost 100% Linux (but Windows on the workstations). I recall Pixar using 64-bit SPARC chips until 4-5 years ago too - I remember passing by the dark + cooled room. Each machine had 13 CPUs and 13GB of RAM (back in 1998). Nowadays, that's nothing.

Quote:

Languages that compile on the fly might be easier to debug and are generally small in footprint just don't fit the current software environments we are all currently working in.

In the end languages that need to be compiled like C++, C#, I guess FORTRAN falls in there are going to be the languages that will dominate for I dare gander... for ever.


Frankly this is BS. Compiled languages will be around forever, yes, but developer time is expensive and software complexity grows and grows. Writing assembly used to be common, but now it's only for very specialized code.

Python, Perl, Java, VB, etc. have all made great strides in software development, proving themselves highly useful in many areas - especially for web server apps where they are "mission critical." It's not unusual to see a webserver with 2, 4, 8 CPUs - what Itanium was designed for. Except then you'd need to write all your web server code in a compiled language (uh, no way).

If you look at the state of video games, more and more game engines are using scripting languages - Lua, Python, C-variants, etc. Utilizing VLIW CPUs with these "dynamic" languages requires rewriting all their implementations to be VLIW friendly. No thanks.

Quote:
EPIC depends heavily on its compiler to organize code and data for ideal execution; no drastic changes to the software have to be made other than 64-bit-safe code, such as correct pointers, constants and API support. Otherwise the compiler does a very good job of organizing everything.


You are quoting marketing spiel. You are also relying on all the compilers in the world (currently designed for RISC + CISC architectures) to magically turn non-parallelized code into something EPIC loves? Um... good luck. It's not easy, and that's why Itanium is tanking and will tank. VLIW isn't bad; it's just too drastic a change for zero benefit - except in the high-end computing market.
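The point about compilers needing to find parallelism can be sketched with a toy scheduler: a wide-issue (EPIC-style) machine only helps when instructions are independent, and a serial dependency chain gains nothing from extra issue slots. This is a simplified illustration, not how any real EPIC compiler or pipeline works:

```python
# Greedy list scheduling: each cycle, issue up to issue_width ready instructions.
# 'deps' maps each instruction to the instructions it must wait for.
def cycles(deps, issue_width):
    done, cycles_used = set(), 0
    while len(done) < len(deps):
        ready = [i for i in deps if i not in done and all(d in done for d in deps[i])]
        for i in ready[:issue_width]:
            done.add(i)
        cycles_used += 1
    return cycles_used

independent = {i: [] for i in range(12)}                    # perfectly parallel code
chain = {0: []} | {i: [i - 1] for i in range(1, 12)}        # serial dependency chain

print(cycles(independent, 3), cycles(independent, 6))  # 4 vs 2: wide issue wins
print(cycles(chain, 3), cycles(chain, 6))              # 12 vs 12: wide issue wasted
```

If the compiler can't break dependency chains (or the source language hides them), all those extra issue slots sit idle, which is the crux of the EPIC debate here.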

I'm not saying you're wrong (with respect to compilers); I am simply saying you don't quite understand what you are saying.
April 15, 2006 8:50:54 PM

This whole debate reminds me of FORTRAN vs. C/C++

FORTRAN - great for numerical computing(has many advanced math constructs built-in) and parallelizable(?), but bad for general purpose software development. Itanium has similar properties.

C/C++ - great for general purpose computing, but extra libraries are needed for numerical computing and there are difficulties parallelizing the code so it's not the best for numerical computing. x86/POWER5/MIPS/Alpha/SPARC is like C/C++

If you look at the state of GPUs (by Nvidia + ATI), a similar thing is happening. Originally they were non-branching, brute-force vector processors, but now they are closer to a general-purpose CPU.

Itanium will die after another $20 billion of investment, and still nobody will use it.
April 16, 2006 12:03:58 AM

Quote:
I'm not saying you're wrong; I am simply saying you don't quite understand what you are saying.


You need to update your knowledge. MIPS + SGI haven't been used for render farms for 4-5 years. Mostly Xeons and Opterons now due to the cost, and almost 100% Linux (but Windows on the workstations). I recall Pixar using 64-bit SPARC chips until 4-5 years ago too - I remember passing by the dark + cooled room. Each machine had 13 CPUs and 13GB of RAM (back in 1998). Nowadays, that's nothing.

Quote:

Languages that compile on the fly might be easier to debug and are generally small in footprint, but just don't fit the current software environments we are all currently working in.

In the end, languages that need to be compiled, like C++, C# - I guess FORTRAN falls in there - are going to be the languages that will dominate for, I dare wager... forever.


Frankly this is BS. Compiled languages will be around forever, yes, but developer time is expensive and software complexity grows and grows. Writing assembly used to be common, but now it's only for very specialized code.

Python, Perl, Java, VB, etc. have all made great strides in software development, proving themselves highly useful in many areas - especially for web server apps where they are "mission critical." It's not unusual to see a webserver with 2, 4, 8 CPUs - what Itanium was designed for. Except then you'd need to write all your web server code in a compiled language (uh, no way).

If you look at the state of video games, more and more game engines are using scripting languages - Lua, Python, C-variants, etc. Utilizing VLIW CPUs with these "dynamic" languages requires rewriting all their implementations to be VLIW friendly. No thanks.

Quote:
EPIC depends heavily on its compiler to organize code and data for ideal execution; no drastic changes to the software have to be made other than 64-bit-safe code, such as correct pointers, constants and API support. Otherwise the compiler does a very good job of organizing everything.


You are quoting marketing spiel. You are also relying on all the compilers in the world (currently designed for RISC + CISC architectures) to magically turn non-parallelized code into something EPIC loves? Um... good luck. It's not easy, and that's why Itanium is tanking and will tank. VLIW isn't bad; it's just too drastic a change for zero benefit - except in the high-end computing market.

I'm not saying you're wrong (with respect to compilers); I am simply saying you don't quite understand what you are saying.

That didn't answer the initial question of why MIPS is disappearing from the market. That was you trying to make a point that I already openly admitted not knowing anything about, in regards to MIPS' downward spiral in the industry.

So if what you say is true - interpreted languages are only good for web applications - why are you trying to argue with me on that point, since we both agree?

Great strides, though, I can agree with when it comes to security and API support, but I cannot agree that they have moved beyond web application software.

As well, your information in regards to the machine arrays that the Itanium was designed for is incorrect. They were meant for 16+ arrays; you have mistaken them for the Xeon array targets.

And finally, games have nothing to do with server CPUs.
April 16, 2006 12:09:20 AM

Quote:
This whole debate reminds me of FORTRAN vs. C/C++

FORTRAN - great for numerical computing(has many advanced math constructs built-in) and parallelizable(?), but bad for general purpose software development. Itanium has similar properties.

C/C++ - great for general purpose computing, but extra libraries are needed for numerical computing and there are difficulties parallelizing the code so it's not the best for numerical computing. x86/POWER5/MIPS/Alpha/SPARC is like C/C++

If you look at the state of GPUs (by Nvidia + ATI), a similar thing is happening. Originally they were non-branching, brute-force vector processors, but now they are closer to a general-purpose CPU.

Itanium will die after another $20 billion of investment, and still nobody will use it.


You seem very confused. C++ and FORTRAN are computer languages; the Itanium is a processor.

Additionally, ILP is something the compiler takes care of, not the particular language being used to write the software.

In fact, neither of your examples really illustrates anything constructive to the conversation; as well, I am a bit confused as to where you think you are taking this, since it sounds like you aren't even remotely introduced to the Itanium, let alone compilers or coding.
April 16, 2006 7:45:25 AM

Quote:
So if what you say is true - interpreted languages are only good for web applications - why are you trying to argue with me on that point, since we both agree?


No, my intention was to say interpreted languages are the future - I didn't believe this until maybe a year or two ago. This is why I mentioned game engines (having worked on Xbox + PS2 consoles myself). Interpreted code has become acceptable even in high-performance real-time areas.

Quote:
As well, your information in regards to the machine arrays that the Itanium was designed for is incorrect. They were meant for 16+ arrays; you have mistaken them for the Xeon array targets.


Sorry, I have no idea what you are talking about - machine arrays? SMP multiproc setups? Itaniums were designed to be workstations too - I worked on one and it weighed more than 4 PCs combined.
April 16, 2006 7:54:32 AM

Quote:
You seem very confused. C++ and FORTRAN are computer languages; the Itanium is a processor.


Yes, but I was trying to make an analogy between general-purpose languages and CPUs vs. specialized languages and CPUs (which most VLIW CPUs like the Itanium tend toward - specialized apps - and are too hard to optimize for generally). Most of the world is going back to general-purpose CPU architectures with some extra instructions for specialized apps. GPUs are a big one I see going in that direction, and physics CPUs will too.

Quote:
Additionally, ILP is something the compiler takes care of, not the particular language being used to write the software.


I have a Brooklyn Bridge I'd like to sell you too. Obviously you are not a developer. To take advantage of Itaniums, all these compilers and interpreters have to be tweaked (or rewritten) to be EPIC-friendly (non-trivial). Sometimes it's easy, but with most dynamic languages it might be a lot of work.

I think most people greatly underestimate the complexity of optimizing for VLIW CPUs like the Itanium. It's not just using a parallelizing compiler and voilà! From my personal experience with vectorizing C compilers, the output is generally pretty mediocre without "marking up" the code with "parallelizing blocks".

I did a quick search on the Itanium and optimizing for it, and found my thoughts echoed:

http://www.usenix.org/events/usenix05/tech/general/gray...

However, the most significant challenge of the architecture to systems implementors is the more mundane one of optimising the code. The EPIC approach has proven a formidable challenge to compiler writers, and almost five years after the architecture was first introduced, the quality of code produced by the available compilers is often very poor for systems code. Given this time scale, the situation is not likely to improve significantly for quite a number of years.

Itanium is too far ahead of mundane software development technology. It will die (as with other VLIW CPUs) a horrible, expensive death.
April 16, 2006 8:04:01 PM

Quote:
So if what you say is true interpreted languages are only good for web applications why are you trying to argue with me on that point since we both agree?


No, my intention was to say that interpreted languages are the future; I didn't believe this until maybe a year or two ago. This is why I mentioned game engines (having worked on Xbox and PS2 consoles myself). Interpreted code has become acceptable even in high-performance, real-time areas.

Quote:
As well, your information regarding the machine arrays that the Itanium was designed for is incorrect. They were meant for 16+ arrays; you have mistaken the Xeon array targets.


Sorry, I have no idea what you are talking about. Machine arrays? SMP multiprocessor setups? Itaniums were designed to be workstations too; I worked on one, and it weighed more than four PCs combined.

No, I can't agree that interpreted languages will ever take the place of languages like C++ or C#. With all due respect, you are the only person I have ever heard claim that this is the future of code development.

But hey, you believe we have extra system cycles to spare on on-the-fly compiling for Java and the like; I can't wait until you get to branching and real I/O to see the folly of the idea.

Yes, I agree they were made for workstations as well, but their sweet spot is 16+, since they scale perfectly with every additional machine.
April 16, 2006 8:14:26 PM

Quote:
I think most people greatly underestimate the complexity of optimizing for VLIW CPUs like the Itanium. [...] Itanium is too far ahead of the mundane software development technology. It will die (as with other VLIW CPUs) a horrible expensive death.

I fail to see why you continue to claim it's difficult to optimize code for the Itanium when the compiler is taking care of it, not the programmer. As well, you aren't spinning code for IA-64 environments, which further devalues what you are trying to say, namely that the Itanium sucks for everything.

You're right, I am not a developer, but a developer in the making I am. Furthermore, this brings me to the point that in the end I wouldn't be developing software on Itaniums to begin with, and frankly I don't see you doing it either.

As for the link, I fail to see the issue at all, since the processor has always been promoted for HPC spin-your-own-code environments: not gaming, not multimedia, not anything pertaining to the existing home PC market.

You are entitled to your opinion on the death of IA-64, but time will tell, not your wishful thinking.
April 16, 2006 8:28:58 PM

Quote:
I think his argument would be better stated as "it is challenging to compiler writers to develop compilers that optimize the machine level code"... he is one layer of the onion too far out :) 


Assembly writers are a different breed of programmer, though: they don't whine and complain, they complete and execute.
April 20, 2006 3:23:01 PM

Quote:
I think his argument would be better stated as "it is challenging to compiler writers to develop compilers that optimize the machine level code"... he is one layer of the onion too far out :) 


Exactly.

It is foolish to think it is trivial to write a C/C++ compiler that can generate optimized VLIW machine code. It's non-trivial, and most commercial products aren't that good at it. So much for the "just use an optimizing compiler" argument....

The CELL (PS3) and Transmeta chips are both VLIW-style CPUs. I might be completely wrong about VLIW CPUs if somebody can design a practical optimizing C/C++ compiler for them (someone probably has, because each of those CPUs splits one instruction into at most two and can toss one away if it can't be parallelized). I hear the CELL's subprocessors don't even have cache memory, and it is WAY more powerful for the same die size.

Itanium is dead if the PS3 and CELL succeed, because the CELL's design will be used everywhere.