Solved

A curious cpu question on raw compute power

Tags:
  • CPUs
  • Core
  • Intel
  • Power
December 16, 2010 4:43:49 AM

Hello,

How does the top-of-the-line Intel CPU core compare to the top-of-the-line AMD core in terms of raw power? I know that Intel is the fastest right now, but how do the two cores compare when you strip them of all their added tech features? For example, is there a benchmark of the AMD Phenom II X6 1100T at 3.3 GHz vs. the Intel Core i7-980X at 3.33 GHz with Hyper-Threading and Turbo disabled on both? Also, testing should use non-biased software, as some code favors Intel instructions.

Thank you!!!!


December 16, 2010 2:31:49 PM

There is no such thing as "biased software". There are simply some things that one manufacturer does better than the other, and that shows in the benchmark. The best way to get a truly "unbiased" benchmark would be to run multiple tests on multiple software platforms to see how the overall picture stacks up. Wait, isn't this what Tom's does? :o 

:p 

Additionally, stripping the processor of enhancements built into the hardware is also not a good comparison point unless you really just want to analyze the cores themselves. In real-world applications these processors depend heavily on these enhancements to increase performance and efficiency. It's not cheating, and it's not an unfair advantage; it's an advance in technology. That's what I don't understand: why do we need to assess the performance of the bare hardware? What is the real-world advantage?

To me, it's like eating a raw steak because you think that cooking it gives the steak an unfair advantage over other steaks. It doesn't matter what the raw ingredients taste like, it's what you do with them and the end result that matters.
December 16, 2010 4:08:05 PM

Actually, the best test would be to write a pure x86 benchmark, instead of one that works within another x86 program (like an OS, for instance).
December 16, 2010 11:21:18 PM

The point of this is just to compare how the CPU cores match up. It is by no means a comparison to say who has the best product; like the title says, it is out of curiosity. Also, there is such a thing as biased software; just recently there was a legal issue between the FTC and Intel:

http://www.eetimes.com/electronics-news/4205889/Intel-n...
January 2, 2011 8:50:03 AM

Anyone have any ideas?
January 3, 2011 1:19:22 AM

Psychoteddy said:
There is no such thing as "biased software". There are simply some things that one manufacturer does better than the other, and that shows in the benchmark. The best way to get a truly "unbiased" benchmark would be to run multiple tests on multiple software platforms to see how the overall picture stacks up. Wait, isn't this what Tom's does? :o 


No, there really is "biased software" out there. There are some pieces of software that will unnecessarily reduce performance on certain vendors' CPUs just because that vendor made the CPU, generally by using CPUID() calls to restrict code paths. PCMark05 is a good example. VIA's CPUs allow the user to change the vendor string returned by the CPUID() function; Intel's and AMD's are fixed. Ars Technica ran PCMark05 on the VIA Nano with its default vendor string (CentaurHauls) and then repeated the test with AMD's string (AuthenticAMD) and Intel's (GenuineIntel). The CPU's score was lowest with its default string, slightly higher with AMD's, and a bunch higher with Intel's. Mind you, neither the processor itself nor the program was changed, and the processor encountered no errors in executing the code. Intel actually got sued over this because its compiler would use the CPUID() vendor string and run very unoptimized code paths on other vendors' CPUs yet run highly optimized code paths on Intel's own CPUs.

Running a lot of benchmarks is a good thing, but running a lot of good benchmarks is much better. Using open-source programs compiled with an open-source compiler with known compile-time optimizations would be the absolute best way to go, since any funny business with suboptimal code paths in a compiler, funny compile-time optimizations, or program routines specifically tuned for one specific processor could be found by somebody looking at the source and repeating the tests. Running benchmarks consisting of code that you can't see, compiled with an unknown compiler using unknown compile-time options, is a real black box as to how well-tuned it is for any one specific processor.

Quote:
Additionally, stripping the processor of enhancements built into the hardware is also not a good comparison point unless you really just want to analyze the cores themselves. In real-world applications these processors depend heavily on these enhancements to increase performance and efficiency. It's not cheating, and it's not an unfair advantage; it's an advance in technology. That's what I don't understand: why do we need to assess the performance of the bare hardware? What is the real-world advantage?

To me, it's like eating a raw steak because you think that cooking it gives the steak an unfair advantage over other steaks. It doesn't matter what the raw ingredients taste like, it's what you do with them and the end result that matters.


Run the processor how it came from the factory, since that is how the users will most likely be using it. The only reason to not do so is if the CPU you get has features not enabled in the version normal customers will buy. Then you would want to bench it once as-is and then another time with the features not present on shipping CPUs disabled.

Best solution

January 3, 2011 3:33:35 AM

Benching is a fun hobby by itself, but in the end it really doesn't accomplish much for the home user, unless you are benching for a specific kind of program you primarily use. And it can get really silly how much people will spend to have the highest benches. In the end the corporate tools will really determine who the winner is... actually, the real winner is "The Corporate Tool", with the money from OUR upgrades. Oh well.
January 3, 2011 3:48:55 AM

Because Linux users actually have serious uses for their computers. Most Linux users are by nature hardcore techies (system admins, IT techs, programmers, scientists, etc.). Remember, no Crysis on Ubuntu yet. Though I do like the Tron light cycle game, I must say.
January 3, 2011 6:56:15 AM

$a = <STDIN>;        # reads the user's input, trailing newline included
system("echo $a");   # passes it straight to the shell, unsanitized
Heh Heh
Don't forget the chomp;
never invoke the Shell Gods!
January 3, 2011 12:50:36 PM

Quote:
Here's a thing that bugged me. Remember when Intel and Nvidia were supposedly cheating on 3DMark benches because they used modified drivers? Now here's my question: what's wrong with modifying software so it can make the best use of your hardware? Isn't that supposed to be the goal?


The problem is that the benchmarks are supposed to represent the general performance of the tested hardware relative to other hardware under real-life usage scenarios. If you optimize the drivers specifically for a benchmark, the part looks much better relative to other parts on the benchmark than it does on actual applications. Optimizing the drivers for a specific application (such as a popular game) would be fine though, and that does happen. Look at the release notes for your GPU drivers and they'll frequently mention improved performance on specific applications.

Quote:
A good example of biased software is Windows and the DX API. Just have a look at how a game that makes use of DX 11 and all its features smashes a mainstream GPU that is DX 11 branded. Nvidia wanted DX 10 and pushed for it, and MS decided not to make a port for XP. What did that mean at the time? Back then, 10 percent of people were on Vista and the rest on XP. Nvidia brought out the DX 10 cards, and everyone had to switch or forget about DX 10, which btw was more a marketing scheme than anything revolutionizing graphics. Now it's the same thing all over again with DX 11.


Microsoft not supporting DirectX 10 on Windows XP was solely Microsoft's choice. It's been said that Microsoft's biggest competitor is older versions of its own software. Vista was not very well received in the market and Microsoft wanted to do anything to get people off XP and onto Vista, so not supporting the new DirectX 10 API on Windows XP was an absolute no-brainer move for them. Nobody had to switch to Vista to use NVIDIA's DirectX 10 cards as they ran perfectly fine in DirectX 9 mode under Windows XP, and the games also had DirectX 9 codepaths and would run fine on XP as well. The only people switching to Vista were really those that wanted to do so.

Quote:
And yes, the DX API can slow any hardware down. Remember the DX 9 bandwidth issue with gaming mice? It dragged CPU performance down with it till some geek nailed it down and posted the fix on his blog.

I know I'm straying off the subject, but look at all these BSODs recently happening to XP users, all kernel-related and due to Microsoft updates. They even said in February that malware messed up one of their updates, which was absurd. Luckily the update was pinpointed and could be uninstalled, yet no fix for it was ever released.


What did you expect from Microsoft? They are not exactly known for providing the greatest-quality software.

Quote:
The reason I'm banging on about XP is that it's such a light OS which can be run on low specs. Yet a newer operating system, with newer and supposedly better ways of doing things, can't be run by those users who use XP for its low hardware requirements. Isn't Win 7 supposed to make better use of hardware? At the moment it looks like it's putting a burden on it. And developers and gaming companies haven't even started to make use of DX 11 and its features.


Windows XP appears to be a "light" OS because it is ten years old. It was known to be a pretty heavy OS when it first debuted and people ran it on PIIIs, Athlons, and Willamette P4s with 128-512 MB of PC133 SDRAM, RDRAM, or DDR-266. Now that we're using quad-core and six-core CPUs clocked well over 3 GHz with more DDR3 RAM than the OS can even support, of course it runs fast. Newer OSes do take better advantage of the hardware as they use more of the hardware's features. Compare how well XP schedules tasks on multiple CPU cores, or heaven forbid, on a modern dual-socket workstation using NUMA, versus Vista or Windows 7. Its task scheduling on multiple cores is poor; remember all of the people jerking around with CPU affinity settings under Windows XP? NUMA is absolutely atrocious on XP, which is probably best portrayed by the ill-fated AMD QuadFX generally being slower with the second CPU installed, since that required the OS to use NUMA.

Quote:
So we will never really know the true performance of anything. Software giants and hardware giants go hand in hand to force the people onto upgrades.


Not really. You can continue to run older versions of software on old hardware. That's what many businesses do as their old 2 GHz P4s still work for them, and so do their installations of Windows XP and Office XP. You can also run current programs on many of the free Unix clones on some very old hardware. A typical terminal-mode installation of a current Linux distribution only needs about 40-50 MB of RAM. Those OSes also have drivers for old hardware that is unsupported by current Windows installations, so everything will work as well. The only caveat is that you can really only run the complexity of programs you were originally able to run on the machine when it was new, though. You're not going to be able to run a full modern KDE4 desktop on a 1996 Pentium 200 with 128 MB of RAM. You could run a simpler window manager like IceWM, which is much closer to Windows 95 in appearance and functionality.

Quote:
The biggest setback for Linux was when developers stopped with OpenGL. The Chronicles of Riddick was promising and it really showed the potential of it. Sadly, it wasn't to be.


The setback was when game developers stopped using OpenGL in favor of Microsoft's DirectX. OpenGL under Linux works perfectly well. You can thank Microsoft for that one as they've tried to deprecate and cripple OpenGL on Windows at least once to get developers to use DirectX. Maybe we'll see OpenGL make a comeback with the rise of non-Windows machines like Apples and the myriad of ARM-powered mobile devices, almost all of which use OpenGL/OpenGL ES if they have any 3D capability (and most do.)

Quote:
Exactly. Why don't we see any benches on Linux?


Because Tom's is predominantly a gaming-oriented site and very few AAA-list games run natively on Linux. The last one I can think of is Enemy Territory: Quake Wars (which I have :D ) and that came out some time ago. A good website for Linux benchmarks is Phoronix.
January 4, 2011 2:38:47 AM

Thanks for all of the comments :D 
January 4, 2011 2:40:06 AM

Best answer selected by meebo.
January 4, 2011 11:31:36 PM

meebo said:
Thanks for all of the comments :D 


Thank you for selecting me as Best Answer.
Happy New Year if it applies.
Good thread
January 4, 2011 11:45:14 PM

Phenom II < i7 < PS3

How's that?