
Dual Xeon 5160 vs Dual Opteron 285

September 18, 2006 7:57:13 PM

I know there has been a review of these procs in server-type scenarios and CAD-type rendering, but what I want to know is how they compare to each other in multimedia tasks (video encoding and games).

I think the major difference will be due to the use of FB-DIMMs in the Xeon. I always think of these tests as memory-intensive, so the Xeon may not fare as well (percentage-wise) but should still beat the Opterons.

Any ideas folks?

Maybe this could be the basis for an article. I think most review sites forget that Xeons are used in the workstation market and by gamers with more money than brains ( :oops: ).
September 19, 2006 1:27:15 AM

FB-DIMMs have a lot more latency than ECC DDR.
September 19, 2006 5:12:29 PM

This was sent to me in a PM:

"The answer is going to depend on the type of work you do. There are basically single-precision usages and double-precision usages. Intel wins single precision hands down. The 3 simple decoders and 1 complex decoder per core give Woodcrest a major speed advantage (30%). Move to double precision (scientific calculations, engineering design, etc.) and it reverses, because the three simple decoders can't read double-precision instructions. So Woodcrest is stuck with 1 complex decoder per core vs. 3 complex decoders per core for the 285. A good example of the difference is found in Gauss, which is the display computer for Blue Gene/L at Lawrence Livermore Labs. Each node is 28% faster using two single-core 152s than the dual-core Core 2 6800 in double-precision tasks, per benchmarks at spec.org http://www.llnl.gov/PAO/news/news_releases/2005/NR-05-1...

NERSC finished a 3-month-long, 24-hour-a-day head-to-head competition between the Core 2 and the AM2 Opteron. Since the workload is double precision, the Opteron came out ahead significantly. Since that test, Intel has announced that they will not use FB-DIMMs with future cores. AMD dropped all plans to use FB-DIMMs with Budapest. FB-DIMM is a great idea that didn't pan out in the real world, and the 38% power-consumption penalty is a real negative when you figure in power and AC costs for a server or larger application. http://www.lbl.gov/CS/Archive/news081006.html

“While the theoretical peak speed of supercomputers may be good for bragging rights, it’s not an accurate indicator of how the machine will perform when running actual research codes,” said Horst Simon, director of the NERSC Division at Berkeley Lab. “To better gauge how well a system will meet the needs of our 2,500 users, we developed SSP. According to this test, the new system will deliver over 16 teraflop/s on a sustained basis.”

SSP is double precision and stresses the CPU to the max. The data is available under the Freedom of Information Act and is very detailed.

In video editing it will depend on what your criteria are. The break point seems to be about 50 million pixels per frame. That is equal to 5.5 30-inch monitors at max resolution and is the theoretical limit for single-precision operations (similar to the 4 GB limit of addressable memory for single precision). If you are George Lucas you will want the Opteron; for the typical home editor, take the Woodcrest. Standard-definition television is better on Woodcrest; high-def mastering is better on the Opteron because the frame exceeds the 4 GB data limit for single precision. The Gauss display at Lawrence Livermore is somewhere around 38x10^12 pixels per second (20 million 19-inch monitors running 1600x1200). Your 30-inch monitor is about 7.8 million pixels (7.8x10^6), a Dell 2005 is about 1.7 million pixels, and a standard 19-inch is 1.9 million pixels at max res.
As for the benchmarks that are floating around, the only reliable ones are at spec.org. http://www.spec.org/spec/ If you see benchmarks that aren't SPEC-standard, take them with a grain of salt: the benchmarks are either too small to be reliable tests of the CPU under full load (it takes about a 100 MB benchmark minimum; SuperPi is 1 MB), or there is the opportunity to tinker with the code to skew the results. Oh, and games are low-level single precision, so Intel wins hands down."
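The per-frame pixel figures in that PM are easy to sanity-check. A quick sketch, assuming common native resolutions for those panels (the PM doesn't state which modes it used, so these are assumptions):

```python
# Rough pixels-per-frame arithmetic for the monitors mentioned in the PM.
# Resolutions below are assumed typical native modes, not taken from the quote.
panels = {
    "30in (2560x1600)": 2560 * 1600,          # ~4.1 million pixels
    "Dell 2005FPW (1680x1050)": 1680 * 1050,  # ~1.76 million pixels
    "19in (1600x1200)": 1600 * 1200,          # ~1.92 million pixels
}
for name, pixels in panels.items():
    print(f"{name}: {pixels / 1e6:.2f} Mpixels per frame")
```

The Dell and 19-inch figures line up with the quote; note that a 30-inch panel at 2560x1600 works out to about 4.1 Mpixels per frame, not 7.8 million, so that particular number looks off.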


I think this is very helpful, and I thought everybody would benefit from having it posted. I did not include the sender's name because it was a PM and the sender may not want his/her name in this thread (flamers being what they are).

Any other opinions?
September 19, 2006 7:14:11 PM

http://www.gamepc.com/labs/view_content.asp?id=xeon5160...

In a limited set of media tests, Woodcrest is clearly faster.

Versus the Core 2 Extreme, it's slower in games, performing closer to the E6600-E6700.

http://www.gamepc.com/labs/view_content.asp?id=5160vs68...

However, the E6600 is faster in games than any AMD processor.

As for the PM, ignore him. Woodcrest can perform up to 4 double-precision floating-point operations every cycle, and with optimized code it can sustain a throughput of around 84% of peak. The Opteron does around 91% of peak, but it can only do 2 double-precision FLOPs per cycle.
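Plugging in those per-cycle and efficiency figures, and assuming stock clocks of 3.0 GHz for the Xeon 5160 and 2.6 GHz for the Opteron 285 (clocks the post doesn't state), the per-core arithmetic works out roughly like this:

```python
# Back-of-the-envelope sustained double-precision throughput per core,
# using the FLOPs/cycle and efficiency figures from the post above.
# Clock speeds (GHz) are assumed stock values, not taken from the post.
def sustained_gflops(clock_ghz, flops_per_cycle, efficiency):
    """Sustained double-precision GFLOPS for one core."""
    return clock_ghz * flops_per_cycle * efficiency

xeon_5160 = sustained_gflops(3.0, 4, 0.84)    # 3.0 * 4 * 0.84 = ~10.1
opteron_285 = sustained_gflops(2.6, 2, 0.91)  # 2.6 * 2 * 0.91 = ~4.7
print(f"Xeon 5160:   {xeon_5160:.1f} GFLOPS sustained per core")
print(f"Opteron 285: {opteron_285:.1f} GFLOPS sustained per core")
```

Even with the Opteron's higher efficiency, the Xeon's 4 FLOPs/cycle gives it roughly double the sustained double-precision throughput per core under these assumptions.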