I was wondering this as I have seen fewer servers being offered with Itanium 2 processors, which may be because the last one was released a year ago. Is there really a point to the Itanium series anymore, other than Intel and HP trying to recoup all the money they spent on it during the 90s? Xeons seem to serve the same market, with only supercomputer clusters being more Itanium-focused. If I remember correctly, Itaniums were vastly superior to other architectures in floating-point performance, but that seems to have relegated them to supercomputer clusters rather than their original market of enterprise servers. Anyway, do you think Intel spending huge amounts of money on this processor is worth it?
It gets Intel money from markets it probably wouldn't otherwise be in, and it currently makes enough to cover its operating costs and research. Meanwhile, Itanium system sales are growing at a pretty decent clip, and it is the only real performance competitor to IBM's POWER in the high-end server market.
I'd imagine that if Intel had known how Itanium would perform on the market, they probably wouldn't have gone ahead. But that money is already sunk, and going forward, as long as Itanium brings in more money than it costs to keep it going, it'll continue.
There's a theory that once Itanium hits a certain adoption level, it will start taking off (a kind of critical mass). I guess Intel is still chasing that level.
Itanium definitely hits a different market segment. If you're looking for huge computing with a small physical footprint, it's Itanium.
One of the main reasons Itanium hasn't done very well is that Xeons are just too good. An Itanium will beat a Xeon as a single unit at the same process technology, but obviously not on a performance/price ratio (and Itanium is typically one process generation behind as well). You can see the impact of good Xeon chips in the recent supercomputer rankings, where Intel gained quite a foothold, due in part to the new Xeon parts that have come out recently.
The Xeon architecture has been getting closer to Itanium, and until Intel closes that gap completely, they'll keep supporting new Itanium versions. Gelsinger said Tukwila is supposed to be a big step towards that end. Itanium currently has at least two major versions planned, so it'll be around for a while.
To be honest, I haven't seen it - a 2004 Opteron 250 (at 2.4 GHz) is quicker than a 2004 Itanium 2 (at 1.6 GHz).
The numbers I come up with (from my own work) show about a 15% difference, with the Opteron being faster.
Compare that in terms of bang for buck and there is only one winner (and it ain't the Itanium).
The Itanium is a *radically* different beast from the Opteron and will perform very differently on the same code. The Itanium is an in-order CPU that can issue up to six instructions per clock tick per core across its eleven issue ports. The Opteron is an out-of-order CPU that can retire up to three operations per clock tick per core. The Itanium pays massive penalties for branchy code and thus depends extremely heavily on a good compiler to make it sing. The Opteron can chew through crappy code rather decently on its own.
Basically, the Opteron is a good general-purpose CPU while the Itanium is an extremely specialized one. The Itanium can embarrass the Opteron if given code that it likes; it can also look like a slug if given code that is branchy and has a lot of pipeline stalls. My university has an Itanium cluster as well as a standard x86 cluster, but as of now I only have access to run my work on the x86 cluster. I'd like to see how the Itaniums would do on my R code, but I doubt I'd be able to find out.
It would be interesting to see some code where the Itanium does perform faster. As an example of the problem, the SPEC benchmarks have varying but limited instruction-level parallelism. I think the Itanium falls between a general-purpose and a specialised processor, which works against it.