Updated SPEC benchmarks

June 29, 2003 6:45:54 PM

I've read <A HREF="http://www.amdzone.com/articleview.cfm?ArticleID=1296" target="_new">amdzone's</A> article on the G5 and its SPEC CPU 2000 benchmark scores; however, I thought it was amusing to see that even amdzone only considered 3.06Ghz-level scores to put next to XP3200's. I've checked some interesting CPU2000 results at spec.org (btw, that search engine gave me some trouble) and here's what I've found, in case anyone is interested (if you're not, ignore this!)(all numbers are base)

<b><font color=blue>Intel CPUs</font color=blue></b>
<i>3.0Ghz P4</i>, Int <b>1164</b>, FP <b>1213</b>
<i>3.06Ghz P4</i>, Int <b>1099</b>, FP <b>1092</b>
<i>1.0Ghz Itanium (McKinley)</i>, Int <b>810</b>, FP <b>1431</b>
<i>1.5Ghz Itanium (Madison)</i>, Int <b>1318</b>, FP <b>2104</b>
<b><font color=green>AMD</font color=green></b>
<i>XP3200</i>, Int <b>1044</b>, FP <b>873</b>
<i>1.8Ghz Opteron 144</i>, Int <b>1095</b>, FP <b>1122</b>
<b><font color=red>Apple</font color=red></b>
<i>G5 2.0Ghz</i>, Int <b>840</b>, FP <b>800</b>
<b>IBM</b>
<i>Power4 1700Mhz</i>, Int <b>1113</b>, FP <b>1598</b>

<b>====UPDATE====</b>
<b>Update: Completely new figures for 1.3 and 1.5Ghz Itanium 2</b>
Itanium 1.3Ghz: int 875, fp 1770;
Itanium 1.5Ghz: int 1077, fp 2041;
8xItanium 1.3Ghz: int 79.4, fp 141; (rates)
16xItanium 1.3Ghz: int 158, fp 278; (rates)
32xItanium 1.3Ghz: int 311, fp 541; (rates)
64xItanium 1.3Ghz: int 601, fp 1053; (rates)
4xItanium 1.5Ghz: int ???, fp 82.2; (rates)
8xItanium 1.5Ghz: int 98.3, fp 164; (rates)
16xItanium 1.5Ghz: int 195, fp 327; (rates)
32xItanium 1.5Ghz: int 385, fp 644; (rates)
<b>Update: Comparative Opteron rates</b>
4x844: int 46.1, fp 44.2; (rates)
4x842: int 41.5, fp 40.6; (rates)
4x840: int 37.4, fp 37.3; (rates)
2x244: int 24.2, fp 24.7; (rates)

<i><b>Update: New SPEC scores for Madison have appeared on SPEC's database. Bear in mind that they come from SGI, which is a company that traditionally gives lower SPEC results than HP. </b> HP already posted considerably better results for Integer operations, and those are in the "intel CPUs" section above... And also, SPEC rates have been benchmarked too, for those of you interested in scalability. Included here for convenience are the prices of those processors:
<b>
Opteron 840, $749
Opteron 842, $1299
Opteron 844, $2149
Opteron 244, $800±50
Opteron 144, $670 ( :smile: !!!)
Itanium 1.3Ghz, $1200 ( :smile: ... not bad)
Itanium 1.5Ghz, $4200</b></i>

<b>====End of Update...====</b>

I've found those numbers to be very interesting... Note that, while the XP3200 is faster than the 2.0Ghz CPU used in the G5, the 3.0Ghz P4 is also considerably faster than the XP3200. I've read somewhere that the 3.2Ghz P4 scores around 1250 or so in both FP and int base... so the 3200 is no match.

Of course, the other processors are for different market segments, and we should keep that in mind when looking at the numbers...

Anyway, going over to the server cpus, Opteron looks good. In fact, it looks very good, but just not invincible-good.

Anyway, what can we really expect from A64, if launched at 2.0Ghz? I'd say that Opteron's architecture is already a good indication of A64 performance levels (am I thinking right here?...), so a 2.0Ghz wouldn't score much more than 20% over the 144 Opteron, or around 1300 or so... which is an excellent score <i>today</i> and is more than enough to compete with the 3.2Ghz northwood, but what about prescott?...
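A quick back-of-the-envelope, assuming SPECint scales roughly linearly with clock on the same core (which it won't quite, since the memory doesn't get any faster): 1095 × (2.0/1.8) ≈ 1217, and a full 20% would be 1095 × 1.2 ≈ 1314, hence the ~1300 figure.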

Then again, this <i>is</i> just one particular benchmark and, as such, shouldn't be considered the "final truth"...
What do you people think?... is SPEC a good indication of performance?... it's synthetic anyway.
<P ID="edit"><FONT SIZE=-1><EM>Edited by Mephistopheles on 07/01/03 10:14 PM.</EM></FONT></P>
June 29, 2003 8:07:33 PM

Yes, for Intel and AMD. IBM's result is a bit of a cheat: normally the L3 cache is shared between modules, but since SPEC only tests one module, all of that huge 128 MB of L3 goes to a single module, unlike the real world where it would be split among eight.
Apple's score wasn't made with the fastest compiler; you could expect around 1000 int and 900 FP from the chip.

The Madison scores from SGI on Linux are around 1050 and 2050, but SGI's results on the 1GHz McKinley were only about 600/1200; GCC is slower than HP's compiler, so Madison should reach roughly 1200/2100, maybe a bit more or less, and perhaps 2200 FP with the Intel compiler.

The 3.2GHz P4 scores in the 1220/1220 range for both.

[-peep-] french
June 29, 2003 8:18:20 PM

Yes, I saw that on SPEC's site... the IBM setup had a completely different cache architecture...

so a 1.5Ghz, 6MB cache Madison might score about 1150±50 int_base and 2150±50 fp_base? that floating point performance looks good... and the int performance has caught up excellently... we'll hopefully be seeing more on that on the net in the following weeks...

Deerfield might just be an excellent choice for the workstation market... let's see how well it stacks up.

IA-64 is using a rather impressive design, isn't it?... a 1.5Ghz chip has a godly FP performance... talk about IPC...
June 29, 2003 10:00:37 PM

So when are you buying Madison?
June 29, 2003 10:28:06 PM

I'm sure he has the $4k to shell out for a processor that has roughly 300 applications written for it.
June 30, 2003 12:53:12 AM

Quote:
Deerfield might just be an excellent choice for the workstation market... let's see how well it stacks up.

In some software, like 3D Studio and the like, it still performs largely below the P4; the Xeon MP is also slower than the plain Xeon in some software even with its large cache, because when the workload doesn't suit high IPC, clock speed will be king.

It actually has about 4 to 6 times the IPC of a P4. The core is good, but it needs faster I/O access.

[-peep-] french
June 30, 2003 1:12:04 AM

Don't forget SPEC is as much a compiler benchmark as it is a CPU benchmark; Intel's ICC compiler is head and shoulders above the others (especially at SSE2 vectorizing), which is why Intel CPUs score so well on SPEC. Commercial software compiled with ICC might (and often does) benefit from this speed boost, so it's not invalid to look at SPEC, but please note that the large majority of software is compiled with GCC or Microsoft's compilers, not ICC. When you look at SPEC scores compiled with one of those, you see an entirely different picture.
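To make the compiler point concrete, here's a minimal sketch in C (my own illustration, not taken from SPEC) of the kind of loop an auto-vectorizing compiler like ICC can turn into packed SSE2 code, handling two doubles per instruction, where a compiler emitting plain x87 code handles them one at a time:

/* Illustrative only: a simple "a*x + y" loop, the classic shape that
   an auto-vectorizing compiler can convert into packed SSE2 operations. */
void axpy(int n, double a, const double *x, double *y)
{
    int i;
    for (i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];   /* two of these fit in one 128-bit SSE2 op */
}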

= The views stated herein are my personal views, and not necessarily the views of my wife. =
June 30, 2003 2:48:26 AM

I wouldn't say the integer performance is bad at all. Consider that it's running at 1GHz and putting out around 800; at twice the clock it would already be well ahead of the competition.
However, the clock speed is what limits it, and if they do improve it, the gain shouldn't be only 50%, since you'd be getting the higher clock speed and the larger cache as well.

--
If I could see the Matrix, I'd tell you I am only seeing 0s inside your head! :tongue:
June 30, 2003 3:04:55 AM

The compiler is part of CPU performance, like Nvidia's "The Way It's Meant to Be Played" program.

ICC produces about 10 to 15% faster integer code compared to the average compiler; GCC is not too bad. On x86 every corporation uses the Intel compiler, and Opteron is tested with ICC too, even though there are some 64-bit compilers for x86-64.

[-peep-] french
June 30, 2003 3:51:52 AM

Um the nVidia program is nothing but a way to market their GPUs and a way to prevent programmers from coding for all platforms!
I'd say compilers are in a completely different class, something legal and honest.

--
If I could see the Matrix, I'd tell you I am only seeing 0s inside your head! :tongue:
June 30, 2003 2:22:30 PM

I'm not considering buying Madison myself.

I work at a physics institute and we write our own software for our own purposes and compile them.

I'm interested in its capabilities only as a tool for doing research. Programming our own experiments and simulations on it, that is all. We're perfectly capable of recompiling our code if needed. And we can learn to use a processor to its fullest.
Quote:
IM sure he has the 4k to shell out for a processor which has roughly 300 applications for

What were you thinking? That I plan on running Doom 3 on Madison? Of course not. Madison is a server/high performance solution, and you're thinking <i>desktop</i>.

And it's a departmental/institutional purchase and many people might be using the computer. How do you guys think that institutions purchase Xeon racks? With spare change? (btw, we've got a few of those around too)
June 30, 2003 2:40:42 PM

Most of the high-end systems are built to do one or two things specifically. The lower down the "food chain" in terms of market, the bigger install base you have in terms of applications. Server markets have traditionally not had a very difficult time transitioning to new architectures because there isn't such a huge install base of software to be ported as there is in the desktop world and server software companies are usually very quick in porting their software.

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
June 30, 2003 3:35:19 PM

Quote:
The lower down the "food chain" in terms of market, the bigger install base you have in terms of applications.

That makes sense, and that's what I was talking about. The Xeon racks we have around here don't have a large software base at all; in fact, they mostly have an operating system, that's all. You can then use them all remotely with any software that anyone around this physics institute cared to write. You can also recompile.

But you're definitely not going to use any proprietary software at all. Red hat linux and open source programs only. Completely different from "lower down the food chain".
June 30, 2003 4:27:11 PM

In Québec there's a new supercomputer for the Université de Montréal with only the OS on it; you can rent the machine for a given amount of time and test whatever you want for free if you're coming from a university as a scientist. It's a supercomputer shared between four universities.

[-peep-] french
June 30, 2003 4:35:22 PM

Yes, ISV support in exchange for good, bug-free coding for Nvidia cards.

Neverwinter Nights was a good example of Nvidia's grip: a Radeon 9700 is only as fast as a GeForce2 there, and DX8 features are disabled in the game if you don't use an Nvidia card.

That's part of buying an Nvidia card: you buy more than the hardware itself, you also get the drivers, the game support, and a kind of warranty that most games and apps will run fine and fast.

Likewise, Intel uses its compiler and publishes optimization guides for Xeon and Itanium, and its chipsets also have an edge.

SPEC is a platform test: the compiler, CPU, chipset and OS all need to be good at everything.

[-peep-] french
June 30, 2003 8:29:18 PM

That is false. nVidia made sure NWN would NOT be coded for ATi at all. There is a difference between encouraging coding for your platform and telling the programmer to shut out every other platform. This is not competition, this is unfair cheating.
Just like with the drivers recently, nVidia is only cheating, not providing fair competition.

This "The Way It's Meant to Be Played" BS is just PR from nVidia, and anyone who falls for it has to be a damn fool or nVidiot.

--
If I could see the Matrix, I'd tell you I am only seeing 0s inside your head! :tongue:
June 30, 2003 8:40:01 PM

Yes, it's a cheap move, sure, but I don't care, I want my game to work. It was working, but I shouldn't have to read a FAQ to make it work. Did the programmers put in anti-ATI code? I don't think so.

Nvidia cheating, yes, but ATI had the Quack issue and buggy drivers for the 8500 on AMD in the early days.

Both have a poor record lately because their target audience is younger and none of it relates to the corporate side; have you heard of cheating or anything like that on Quadro or FireGL cards? No, because those are aimed at corporations. Do you hear Intel or AMD babbling? Almost never, especially on Intel's side: they never leak something before they need to, they don't create hype, they stay professional. AMD should do the same and rein in their fans; the little stories don't help AMD sell big systems on the market, and the IT managers there are sensitive to reputation.

[-peep-] french
June 30, 2003 8:47:48 PM

Quote:
Yes, it's a cheap move, sure, but I don't care, I want my game to work.

Two things:
1) Sure, if you like clippy graphics, low quality images, just for better performance.
2) Most GF FX owners have had nothing but instabilities operating their lovely "stable and high quality" NV3x.

Quote:
Nvidia cheating, yes, but ATI had the Quack issue and buggy drivers for the 8500 on AMD in the early days.

Difference being ATi did it once, ADMITTED TO IT, and stopped. nVidia still cheats, still DENIES, and refuses to remove the detection cheats. ATi removed them in Catalyst 3.5. If nVidia continues, I won't care at all if they go under, it was their own undoing. Us consumers will only buy from the companies that deliver solid products and provide us with some reliability like with drivers. Can the same be said about nVidia?

--
If I could see the Matrix, I'd tell you I am only seeing 0s inside your head! :tongue:
June 30, 2003 8:58:59 PM

Quote:
2) Most GF FX owners have had nothing but instabilities operating their lovely "stable and high quality" NV3x.

I don't own a GeForce FX.

Quote:
Sure, if you like clippy graphics, low quality images, just for better performance.

That's what I got in NWN on my ATI; on my GeForce it runs fine.



Quote:
Difference being ATi did it once, ADMITTED TO IT, and stopped. nVidia still cheats, still DENIES, and refuses to remove the detection cheats. ATi removed them in Catalyst 3.5. If nVidia continues, I won't care at all if they go under, it was their own undoing. Us consumers will only buy from the companies that deliver solid products and provide us with some reliability like with drivers. Can the same be said about nVidia?

Whether they have cheated or not is not my point; the point is that ISVs support Nvidia more than anyone else. Also, the cheats were present only in 3DMark2003.

[-peep-] french
July 1, 2003 12:55:01 AM

nVidia support is starting to fall, that's how I see it.
Their DirectX 9 library is weak in performance. CineFX isn't successful at all because it is proprietary.
ATi plays by the general rules. Who will be chosen in the long run?
We'll see soon, but this "The Way It's Meant to be Played" alliance is bullcrap, even UT2003 isn't THAT much pro-nVidia and I'd be hard-pressed to believe it has ANY nVidia optimizations that ATi didn't get.

--
If I could see the Matrix, I'd tell you I am only seeing 0s inside your head! :tongue:
July 1, 2003 1:08:36 AM

Quote:
We'll see soon, but this "The Way It's Meant to be Played" alliance is bullcrap, even UT2003 isn't THAT much pro-nVidia and I'd be hard-pressed to believe it has ANY nVidia optimizations that ATi didn't get.

But they were quick to leave SSE instructions out, to make sure AMD looks good.

[-peep-] french
July 1, 2003 1:27:18 AM

I didn't get that.

--
If I could see the Matrix, I'd tell you I am only seeing 0s inside your head! :tongue:
July 1, 2003 1:29:42 AM

Hm... I think he was talking about the Unreal engine and how it always tends to favor AMD in benchmarks. (because of lack of SSE2 or something, I suppose).
July 1, 2003 1:31:44 AM

If it supported SSE, it would support Pentium 4 as well, logically.
Even then, I was not aware that SSE and SSE2 had THAT much of an impact in games. Have you seen the latest "improvements" by such? Are there? Anyone got proof that they can help a lot?

--
If I could see the Matrix, I'd tell you I am only seeing 0s inside your head! :tongue:
July 1, 2003 1:42:31 AM

Well, under games, I don't really know, because they're mostly not open code, and can't be recompiled into different versions for testing at all. (that would be illegal)...

So it's hard to tell. Aces Hardware has put up an interesting <A HREF="http://www.aceshardware.com/" target="_new">news article</A> which compares various compilers with and without the P4's SSE2 optimisations when building a floating point benchmark (which is a logical area to look for differences, as the P4's FPU isn't the greatest and could use the help). In that article, you can see how enabling SSE2 makes a huge difference when using Intel's compiler. I do not know, however, which compiler is mostly used for games, but using an Intel one should make a big difference; using GCC, for example, will handicap Intel's platform rather considerably. Unreal Tournament could, for instance, be using GCC instead of Intel's compiler. And, to get even more "conspiracy-theory" about it, AMD might even have paid the UT developers to do so... it would make sense in my mind.
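For anyone who wants to try that kind of comparison on their own floating-point code, the idea is just to build the same source several ways and time it. The invocations below are roughly what the compilers of this era take; the exact flag names are from memory and vary by version, so treat them as an assumption, and flops.c is just a placeholder name:

gcc -O2 -mfpmath=387 flops.c -o flops_x87          (plain x87 FPU code)
gcc -O2 -msse2 -mfpmath=sse flops.c -o flops_sse2  (scalar SSE2 code, gcc 3.x)
icc -O3 -xW flops.c -o flops_icc                   (ICC targeting the P4 with SSE2, vectorized)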

What kind of operations does a game do, anyway? More integer operations or more floating-point ones?... I can't actually get a grasp on that... But I would tend to think that they're more floating-point-oriented.

<i>Edit: If you're interested, but don't want to check Aces Hardware, you can just check <A HREF="http://www.aceshardware.com/files/news/images/flops_com..." target="_new">this graph</A>, which compares how much floating point performance can be extracted from the P4 using different compilers and different compiler settings. Compilers from Intel, GCC and Microsoft's Visual Studio are compared in that graph. Check it out, it's interesting.</i> <i>Edited by Mephistopheles on 06/30/03 10:11 PM.</i>
July 1, 2003 3:20:09 AM

Games require lots of integer work for score-tracking and number displays, but for the most part it's floating point. 3D games with serious detail require serious precision, and doing it in integer only (the trick the N64 used, whatever it was called) would be bad for image quality and slow. The biggest proof that it's float is the huge talk around FP precision on the current generation of video cards, and the allegations of nVidia using 16-bit FP precision instead of 24-bit, the minimum required by the DX9 compliancy tests. And it's actually 128-bit precision, though I dunno why they then refer to it as 16, 24 or 32.
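(For what it's worth, the usual integer-only trick is fixed-point arithmetic rather than true FP emulation; here's a minimal C sketch of a 16.16 fixed-point multiply, just to show where the precision goes:)

/* 16.16 fixed point: 16 integer bits, 16 fractional bits in a 32-bit int. */
typedef int fixed;                           /* real value = fixed / 65536.0 */

fixed fx_mul(fixed a, fixed b)
{
    /* widen to 64 bits so the intermediate doesn't overflow, then shift
       back down; everything below 1/65536 is simply thrown away, which is
       where the image-quality loss comes from */
    return (fixed)(((long long)a * b) >> 16);
}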

--
I am my own competition. -VJK
July 1, 2003 5:05:10 AM

Have you read the discussion on the boards about how quick people were to defend GCC? It's a shame, because it was a major, clean win for the Intel compiler. Frank, the guy who compiled the workload, tried all the "tips" that had been given by the GCC "gurus", and none of them produced a faster overall score. The Intel compiler was also the fastest with SSE disabled versus all the others, including other compilers with SSE2 optimization enabled, let alone the staggering 3x lead once its own SSE2 is turned on, reaching a nice 1.8 GFLOPS in many tests.

To clear up my "even I cannot read it" post from before: Unreal has never featured SSE code, only plain x86, yet a powerful 3.2GHz Canterwood still sits in the 100 FPS range. It's not about how huge the game engine is, but about how badly it's coded; a 3.0GHz P4 reaches only 60 FPS in NWN even with the best graphics card.

The bad programming in Unreal was intentional.

[-peep-] french
July 1, 2003 5:23:49 AM

On video cards, the precision is called "128-bit" because all data values (pixels) are dealt with in vectors (4 32-bit FP values, a red value, a green value, a blue value and an alpha value). They are massively parallel SIMD processors (in fact, very similar to IA-64's design).
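Here's a minimal C sketch using SSE intrinsics, just to illustrate the "128 bits = four 32-bit floats" layout; the shader hardware itself isn't x86, of course, so take it as an analogy only:

#include <xmmintrin.h>   /* __m128 holds four packed 32-bit floats */

/* Halve the brightness of an RGBA pixel: one SIMD multiply touches the
   red, green, blue and alpha channels all at once. */
__m128 halve_pixel(__m128 rgba)
{
    const __m128 half = _mm_set1_ps(0.5f);   /* 0.5 broadcast to all four lanes */
    return _mm_mul_ps(rgba, half);
}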

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
July 1, 2003 3:57:02 PM

That relates to DX9; I was speaking of vector instruction support.

IA-64 has almost no vector ops; it uses an inner loop if needed.

I dont like french test
July 1, 2003 9:07:03 PM

Most graphics cards' native ISA is VLIW. It isn't SIMD per se, but it functions like it most of the time. In that sense, it is a vector processor.

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
July 2, 2003 2:00:54 AM

Um, I think you have a word for what I'm about to do...

*bump*

There you go. I updated the relevant SPEC benchmarks in the first message. Interesting stuff there... I think Eden was a little worried that scalability on Itanium was worse than on Opteron? Going by the rate numbers, Opteron doesn't scale that much better than Itanium at all. Both scale well, with Itanium looking as if it'd scale a bit better...
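Dividing the posted rates by CPU count backs that up: Itanium 1.3Ghz fp_rate per chip is 141/8 ≈ 17.6, 278/16 ≈ 17.4, 541/32 ≈ 16.9 and 1053/64 ≈ 16.5, so only about 6% is lost going from 8-way all the way to 64-way. For Opteron we only have small configurations to compare: int_rate per chip is 24.2/2 ≈ 12.1 for the 2x244 versus 46.1/4 ≈ 11.5 for the 4x844.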

And with the 844 costing $2149, you can see that a 1.3Ghz Itanium has rather good performance for its price, particularly if used in multiple-cpu configurations.
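Rough numbers, counting CPU list prices only and ignoring boards, memory and everything else: 4x Opteron 844 is 4 × $2149 ≈ $8600 for an fp rate of 44.2, about 5.1 per $1000 of CPU; 8x Itanium 1.3Ghz is 8 × $1200 = $9600 for 141, about 14.7 per $1000; and 4x Itanium 1.5Ghz is 4 × $4200 = $16,800 for 82.2, about 4.9 per $1000.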

Then again, the single-processor 144 is probably the best price/performance by a long shot for single-processor workstations, no doubt about that. Oh, and for single-CPU workstations that don't need 64-bit, the even cheaper and quite good 32-bit 3.2Ghz P4 is more than enough.

Anyway, lately, I've been evaluating Itanium in comparison to Opteron here, but just as a reminder, I am aware that 32-bit code runs easier on Opteron than on Itanium (though someone said this Itanium will run 32-bit a little smoother...) and that that is one of Opteron's main strengths.
July 2, 2003 4:33:16 AM

You mean a VPU? Yes, possibly, that makes sense, but to say Itanium is a vector CPU, no.

I dont like french test
July 2, 2003 4:39:26 AM

VPU = Vector Processing Unit....

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
July 2, 2003 4:48:06 AM

Also Visual Processing Unit, to be picky.

--
I am my own competition. -VJK
July 2, 2003 4:53:34 PM

I would like to discuss that, but my knowledge of the internal workings of graphics cards is close to zero.

I dont like french test