Does AMD's Athlon 64 X2 6000 Have Any Kick Left? - Page 3

April 22, 2007 7:46:23 PM

grrr, trolls and fans

@mapesdhs I agree with you.

I was not starting a debate; the last page of posts had nothing to do with this topic.

Fact: a C2D at this time is faster than an Athlon 64 chip, we all know this.

If I was going to be making a NEW system I would have used C2D, but I already have the computer and just need a chip, and for just £100 I can upgrade to a 6000+ X2 (minus the £40-£50 from selling my 3800+ X2, which is why I put £100 before). My games will run nice and smooth with that chip, and so they would with an E6600, but TBH if the frame rate is more than 60FPS or the game is not stuttering, both of the CPUs will do just fine.

This system will keep me happy for another year or so. With DX10 gaming coming soon, the CPU will no longer be the limiting factor again for some time; it'll be back to the GPU. We've got the 8900 and the R600 (X2800) coming soon; it'll be interesting to see what they've got to offer.

(someone is going to find an interesting way to flame this from the last 5-6 posts from 3 users)
April 22, 2007 9:21:51 PM

r0ck writes:
> Where are these forums telling you DDR 667 is wimpy? Maybe your Xeon Dell sucked
> because 1. Xeon systems aren't for gaming (FBDIMM??)

I originally bought it for video encoding, and because it was very cheap. At the time,
I'd never heard of Oblivion, though I'd been looking forward to Stalker for a long time.
The X1950 came out at about the same time I found out about Oblivion, so it seemed
like an ideal way to be able to play exactly the kind of games I like to play. It's nonsense
to say XEON systems aren't for gaming; that's obviously not the market they're aimed
at, but there's no reason why one can't use a XEON system for games.

The Dell didn't really suck btw, it actually wasn't that bad (eg. 3DMark06 = 4605),
and Oblivion frame rates were pretty reasonable at 1024x768 with 4X AA and 8X AF,
eg. outside areas were around 25 to 50 fps. I merely discovered while upgrading my
brother's system to an o.c.'d Athlon64 3400+ (2.64GHz), DDR400, with the same gfx,
that it would be decently faster with a newer RAM standard, confirmed by running
numerous other tests - everything from Sandra and FutureMark to Cinebench and
my own 'real-world' fps notes from Oblivion plus some video conversion timings using
VirtualDub and TMPEGEnc. The Precision 650 has PC2100 ECC.


> 2. It's a Dell.

To be fair, the Dell workstation series have generally always been pretty good, eg. the
current Precision 690 is a bit of a monster (2 x quad-core 2.66GHz 2x4MB XEON
X5355, 1333 FSB, _quad_ channel DDR2/667 memory, up to _64GB_ RAM, dual
NVIDIA Quadro FX SLI [though it could take GF8800 SLI instead of course], SAS
storage, etc.). Hehe, a maxed-out 64GB RAM system costs $70000 though. :D  Or
there's the lesser Core2Duo-based 390 series which takes a quad-core QX6700.
As 'professional' workstations go (eg. ECC RAM for better memory reliability)
they're not bad at all, easily up there with HP/Compaq and IBM Intellistation. In fact,
the 690 tops the Viewperf9 list with a dual FX4600 SLI.

If you're referring to the Dell consumer systems, well yeah it wouldn't surprise
me if they're a bit naff. Until upgrading my brother's system though, my exposure
to PCs had mostly been with 'workstation' systems (SGI VW320/540, dual-XEON
Intergraph Wildcat, etc.) where I used to work as head admin (www.nicve.salford.ac.uk).

At a time when a good 650 system would normally go for $1600, I bought two of them
for $1000 and sold one of them within 2 weeks for $1000, so the system effectively
cost me nothing. Base spec was dual-XEON P4/2.66 (512K L2 per CPU, FSB533),
2GB PC2100 ECC, Quadro4 900XGL, and a Seagate 36GB SCSI U320 68pin
connected to the onboard LSI U320 PCI-X RAID. The main change I made was replacing
the 36GB with 3 x 147GB 15K U320 Fujitsu (200MB/sec, avg. access time 1ms!).
I bought the X1950Pro AGP about 2 months later, plus a second X1950 for my
brother's system for xmas.

The only thing I'll really miss is the PCI-X. Can't get the same speed out of the lame
32-bit/33MHz PCI slots that all consumer motherboards seem to have these days. I did look for
one, but no luck (I don't mean there isn't one period, but there weren't any available
from UK sellers).


> http://www.anandtech.com/memory/showdoc.aspx?i=2800&p=7
> Focusing just on games, 800 has 1.02% advantage for Conroe. Surprisingly, for
> AM2, it's just 1.07%.

None of those tasks/games are ones I'm interested in, so the results aren't useful
for me, plus the article doesn't consider the RAM overclocking which inevitably
happens when pushing a 6000+ above stock speeds.

I'm more than happy with the results I'm getting, and looking at example results
for Core2/X1950Pro systems, I'm not losing out at all by having a 6000+ instead.
Remember that right now I don't have any fancy cooler so the o.c. to 3.12GHz is
actually forcing the RAM to run at less than 800 speed (780 to be precise). Thus
for example:

http://tinyurl.com/36pu66

there is:

[code:1:c359943265]
CPU : Intel Core 2 Duo E6400 at 3448MHz
Card : HIS Radeon X1950 PRO 256MB
Speeds : 654MHz Core/ 1586MHz Memory

Catalyst Driver : 6.9
3DMark05 : 11845
[/code:1:c359943265]

I currently get 3DMark05 = 11094 with the CPUs running at just 3.12GHz, so it
seems to me my AM2 system is running very well (and my GPU timings are lower,
ie. 641/1566).
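
That 3DMark05 gap works out at only about 6%, despite the E6400's higher clock - a quick check:

[code:1:c359943265]
(11845 - 11094) / 11845 = ~6.3%
[/code:1:c359943265]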

And on this page:

http://www.hardware.info/benchmarks/cm1x/view/

the fastest X1950Pro result listed (with a 3.6GHz P4) is 11400, so I think the 11094
I've obtained is right in line with what's possible with this card; again I see no
evidence of any significant benefit in having a Core2 instead.

And here's a review of an X1950Pro AGP running with an E6600 overclocked to
3.33GHz, PC6400, etc.:

http://tinyurl.com/2kx5hx

so compare the results:

[code:1:c359943265]
              E6600/3.33GHz   E6600/3.33GHz   6000+/3.12GHz
              (580/702)       (640/716)       (641/783)

3DMark03:     18055           [not shown]     17810
3DMark05:      9889           [not shown]     11094
3DMark06:      4608           5534            5571
[/code:1:c359943265]

My system beats the o.c.'d E6600 for the newer 06 test!

I found site after site with results like this. They prove conclusively that I would
have obtained _no_ significant benefit from having an overclocked E6600
system instead. On another site, the only Core2 X1950Pro setup that beat
my 6000+ was a system using water cooling for everything and thus had
much better GPU speeds.

QED.

Ian.

PS. Yes, of course the numbers would be better with an E6700, but as I said
before that chip was 100% more expensive and thus not an option (305 UKP).
April 22, 2007 9:28:39 PM

leexgx writes:
> if I was going to be making a NEW system I would have used C2D, but I already have the

I absolutely agree. If I didn't already have an AGP X1950, I'd have bought parts to make
an E6600 system with one of the newer less-costly 8800 cards instead.


> computer and just need a chip, and for just £100 I can upgrade to a 6000+ X2 (minus the
> £40-£50 from selling my 3800+ X2, which is why I put £100 before) ...

Funny you should mention that - my CPU budget had originally been 100 as well, but
the day before I was planning to buy everything I sold a power supply to the US Air
Force in Abu Dhabi for 40 UKP; that same day I found out about the AMD price drops,
and voila I could afford a 6000+.


> my games will run nice and smooth with that chip, and so they would with an E6600, but TBH
> if the frame rate is more than 60FPS or the game is not stuttering, both of the CPUs
> will do just fine

Spot on!


> we've got the 8900 and the R600 (X2800) coming soon; it'll be interesting to see what they've got to offer

Do you know if there will be AGP versions of the new ATI cards? Or is the 1950 the
last AGP product?

Likewise, I'll be ok for a long time. Oblivion & Stalker on the new setup will do me just fine
for at least a year, by which time I'll have a PS3 anyway (waiting for Mercenaries2
and GTA4).


> (someone is going to find an interesting way to flame this from the last 5-6 posts from
> 3 users)

I'm sure they'll find a way. :) 

Ian.
April 22, 2007 10:01:50 PM

"None of those tasks/games are ones I'm interested in"

They're more relevant than 3DMark. But ok.

"the fastest X1950Pro result listed (with a 3.6GHz P4) is 11400, so I think the 11094
I've obtained is spot on what I could expect to get from what is possible with this
card; again I see no evidence of any significant benefit in having a Core2 instead."

GPU bottlenecking? Notice the load released with the quads ahead?

"They prove conclusively that I would
have obtained _no_ significant benefit from having an overclocked E6600
system instead."

Of course not, not on a GPU test.

The important thing is that you could've spent similar for similar performance. And spared Mother Nature http://www.techreport.com/reviews/2007q2/core2-qx6800/c... 80 watts :D  Happy Earth Day.
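
Back-of-envelope on what 80 watts means, assuming the box runs 24/7 and a typical 2007 UK rate of about 10p/kWh (my assumptions):

[code]
80 W x 24 h x 365 days = ~700 kWh per year
700 kWh x ~10p/kWh    = ~70 UKP per year
[/code]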
April 22, 2007 10:26:28 PM

Quote:
Do you know if there will be AGP versions of the new ATI cards? Or is the 1950 the
last AGP product?

ATI has stated that their PCIe to AGP bridge chip is compatible with the R600 (new GPU). That is no guarantee that anybody will actually build such a product, but why else would they make a point of stating that the bridge would work? Still, if the pricing of those is true to past high-end AGP cards, you might be able to buy a PCIe motherboard and the PCIe version of the card for the price of the AGP version.
April 23, 2007 1:00:02 AM

r0ck writes:
> They're more relevant than 3DMark. ...

Not to me. I've never said my comments apply in a general sense, only to my situation.
Like I said earlier, the 3DMark tests were useful to me because the way they behave
closely reflected the differences between my XEON system and my brother's Athlon64,
combined with other tests.

What's the point in basing purchasing decisions on game test numbers that are for
games I don't play?


> GPU bottlenecking? Notice the load released with the quads ahead?

Then explain why the overclocked E6600 gave pretty much the same 06 results
as my 6000+ with the same GPU (actually slower than my 6000+, though my
system's GPU is running at a higher RAM speed), and significantly slower for
the 03/05 tests.

In this thread people have been saying a Core2 system would have been
distinctly faster, eg.:

[code:1:f36a8fa738]
"Compared to the competition, the X2 6000+ fails in all areas: Price/performance,
heat, overclocking."
[/code:1:f36a8fa738]

but that's clearly not the case, at least not when the gfx is an X1950 Pro.
And the oft-used comment that 3DMark isn't relevant doesn't apply here
since most review sites now say the latest 06 test is a worthy comparison
given the complexity of the latest games.

It seems to me that when a test result supports someone's claim that a Core2 is
'better' then they use it, but when the same tests show it's no better or actually
worse, then all of a sudden that test is deemed irrelevant.

I've stated which tests I focused on and why. I don't care if other people do not
find them relevant for their situation. I did, and that's all that matters with respect
to making a purchasing decision. Sandra was useful because it allowed me to
examine the different system elements in isolation, something which is an important
aspect of benchmarking and locating bottlenecks. Analysing a system is best done
both top-down and bottom-up.


> Of course not, not on a GPU test.

And running Oblivion/Stalker with decent quality settings is precisely that,
ie. more GPU intensive. Having a good CPU simply means the system can
feed the GPU at a decent rate. Thus, once again, what I've ended up with
is a system that is not being held back by the CPU for the _tasks which are
important to me_.


> The important thing is that you could've spent similar for similar performance. ...

Wait a minute, up until now you've been saying I could have bought _better_ for
similar money, not just similar for the same money. Now all of a sudden it's only
similar? :D  That contradicts earlier comments where people were insisting that
an overclocked E6600 would give me genuine benefits (not just on this forum I
might add), but now that I've shown it wouldn't have, all of a sudden the claim
is only similar for similar - if that's the case, then who the hell cares what CPU
choice I made? :D 


> And spared Mother Nature
> http://www.techreport.com/reviews/2007q2/core2-qx6800/c... 80 watts
> :D  Happy Earth Day.

*grin!* Trust me, with the SGI stuff I have to do each day, having a PC plus or
minus 40W will make zero difference. The bootup time of my 24-CPU Onyx rack
has a much bigger effect on my power bill. :D  (chews 2400W) And for all the
juice a Core2 might have saved me, it's swamped by the RAID racks I'm running
at the moment re tests for a movie company (36 x 73GB, 3 x QLA12160,
511MB/sec, 55fps HD 1920x1080 uncompressed 4:4:4:4).
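
For scale, uncompressed 4:4:4:4 HD is hefty. Assuming 8 bits per channel (my assumption; a 10-bit pipeline would need about 25% more):

[code]
1920 x 1080 pixels x 4 channels x 1 byte = ~7.9 MB per frame
7.9 MB x 55 fps                          = ~435 MB/sec
[/code]

so the 511MB/sec the racks sustain leaves some headroom.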

Cheers! :) 

Ian.
April 23, 2007 1:09:24 AM

If it's similar performance with nicer wattage, that is better overall 8) Have a nice day and happy 3DMarking.
April 23, 2007 1:13:28 AM

Senor_Bob writes:
> ATI has stated that their PCIe to AGP bridge chip is compatible with the R600 (new GPU).
> That is no guarantee that anybody will actually build such a product, but why else would they
> make a point of stating that the bridge would work? ...

Yes, good point, though perhaps they just want to publicly keep their options open.

I guess it depends on whether or not the manufacturers think there would be
sufficient demand.

The one shame of the X1950 Pro AGP is that it didn't launch with a more expensive
option of the same setup but with 48 pixel shaders instead of 36. I would have paid
a bit extra for that. As it is, the GPU/MEM clock differences to the XTX are largely
eliminated by having a good cooler and overclocking.


> Still, if the pricing of those is true to past high-end AGP cards, you might be able to buy
> a PCIe motherboard and the PCIe version of the card for the price of the AGP
> version.

Very true. :D 

Ian.
April 23, 2007 4:32:05 AM

heh, going to get the 5600+ X2 (2.8GHz) and get that to 3GHz (FSB 215); it's like £40-£50 cheaper and will do 3GHz straight away
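
That works out almost exactly, assuming the 5600+'s stock 14x multiplier:

[code]
215 MHz x 14 = 3010 MHz, i.e. right on 3GHz
[/code]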

CPU/GPU limit things:
anything below an X1950/7900GTX SLI is GPU bound once the CPU is fast (that's why it's pointless doing CPU tests with anything pre-X1950)
a lot of the reviews around when the 6000+ X2 launched were using ATI X1950 cards, and those weren't proper results as the GPU limited the tests (FPS was basically the same on high-quality in-game settings)

anything above an R600/8800 GTS 640MB is basically CPU bound in DX9 programs/games (unless the video settings are set to insane levels; with more res and AA you start to push the card to its limits, or DX9's limits)

@mapesdhs
lol, nice setup you've got there, you must mess with some interesting stuff
April 23, 2007 4:35:46 AM

Quote:
Senor_Bob writes:
> ATI has stated that their PCIe to AGP bridge chip is compatible with the R600 (new GPU).
> That is no guarantee that anybody will actually build such a product, but why else would they
> make a point of stating that the bridge would work? ...

Yes, good point, though perhaps they just want to publicly keep their options open.

I guess it depends on whether or not the manufacturers think there would be
sufficient demand.

The one shame of the X1950 Pro AGP is that it didn't launch with a more expensive
option of the same setup but with 48 pixel shaders instead of 36. I would have paid
a bit extra for that. As it is, the GPU/MEM clock differences to the XTX are largely
eliminated by having a good cooler and overclocking.

Not that I even remotely suggest you upgrade to it, but Gecube does offer an X1950XT in AGP, which is the champion of the present AGP cards, and has the 48 pixel shaders. But I assume that wasn't out when you upgraded.

I wouldn't be surprised either way on the AGP version of the R600. It'll come down to demand like you said. Given how hard it has been to find the AGP X1950s in stock, I would expect the demand to be high enough for at least one vendor to offer the 2x00 series in AGP, since it seems to be as simple as adding the bridge chip to their normal product. It might be a lower line like the 2600 instead of the 2900, though. I think there are too many AGP machines (including the one I'm typing on) still out there to abandon that market just yet.

Unfortunately for me, my AGP system is an obsolete Intel socket 478 :(  so there are no decent PCIe boards and the X1950s weren't out in AGP the last time I upgraded (GF7800GS instead). Fortunately for me, my new E6700 ships tomorrow :)  (yay price cuts!!) so that will be it for AGP and me.
April 23, 2007 4:47:03 AM

Quote:
heh, going to get the 5600+ X2 (2.8GHz) and get that to 3GHz (FSB 215); it's like £40-£50 cheaper and will do 3GHz straight away

CPU/GPU limit things:
anything below an X1950/7900GTX SLI is GPU bound once the CPU is fast (that's why it's pointless doing CPU tests with anything pre-X1950)
a lot of the reviews around when the 6000+ X2 launched were using ATI X1950 cards, and those weren't proper results as the GPU limited the tests (FPS was basically the same on high-quality in-game settings)

anything above an R600/8800 GTS 640MB is basically CPU bound in DX9 programs/games (unless the video settings are set to insane levels; with more res and AA you start to push the card to its limits, or DX9's limits)

5600 - good call. Better $/performance than the 6000 (or maybe I should say £/performance for you UK types :)  ).

Oblivion and Supreme Commander are two good examples of GPU bound versus CPU bound - Oblivion can use all that SLI'd 8800GTXs will give it, whereas SupCom can take a quad core to full utilization but doesn't need much in the way of GPU - anything decent will work for that game. I would imagine that SupCom is one of the few currently available games where the OC'd C2D, and especially the C2Q, would have a clear advantage over the AMDs; most of the GPU-bound games won't care.
April 23, 2007 9:25:05 AM

leexgx wrote:
> heh, going to get the 5600+ X2 (2.8GHz) and get that to 3GHz (FSB 215); it's like
> £40-£50 cheaper and will do 3GHz straight away

True, but then as I say, on the day I had an extra 40 to spend anyway because I sold
something to the Top Gun dudes (;D), and with the 6000+ it can be upped to 3.2,
so what the heck? :) 

I don't have a better cooler yet, but right now it's running at 3.15GHz. Early results
suggest the 6000+ is about 2.5X faster than my old dual-XEON P4/2.66 for video
conversion, so I'm very happy on that front (example conversion of an MPEG to
DivX: 4min 38s on the Dell 650, 1min 43s on the 6000+), which is in line with my
expectations based on raw CPU/RAM tests (the XEONs are not that much slower
in peak theoretical core speed, but the RAM is about 70% slower).
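
For that example clip the ratio actually comes out a shade better than 2.5X:

[code]
Dell 650: 4 min 38 s = 278 s
6000+:    1 min 43 s = 103 s
278 / 103 = ~2.7X
[/code]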


> ... CPU/GPU limit things:
> anything below an X1950/7900GTX SLI is GPU bound once the CPU is fast (that's why
> it's pointless doing CPU tests with anything pre-X1950)

Yes, that sounds logical.


> a lot of the reviews around when the 6000+ X2 launched were using ATI X1950 cards, and
> those weren't proper results as the GPU limited the tests (FPS was basically the same on
> high-quality in-game settings)

Although that's correct from a review testing sense, at least it was real-world in that many
people were using the card included in the test (or something of similar speed), and of
course such reviews were certainly useful for me. :) 


> anything above an R600/8800 GTS 640MB is basically CPU bound in DX9 programs/games
> (unless the video settings are set to insane levels; with more res and AA you start to
> push the card to its limits, or DX9's limits)

Yep, that sounds spot on!


> lol, nice setup you've got there, you must mess with some interesting stuff

I like to think so. ;)  SGI stuff; it's a pretty narrow field, but interesting. I've been
asked by a movie company to set up a JBOD RAID for use with uncompressed
high-def video at 4:4:4:4 quality, using an SGI that's 7 years old (Octane2). It's
kinda fun trying to see how far old systems can go; with two 600MHz CPUs, I've
got it to push 450MB/sec sequential write speed and 511MB/sec sequential read
speed, which is 2X faster than real-time (the same setup gives 456MB/sec and
503MB/sec for random write/read). I could push it further (maybe 600MB/sec),
but the extra expansion slot is filled with the gfx option.

SGIs are very good for video capture, editing and playback (all done with
acceleration hardware or benefiting from fast I/O), but they're poor for final video
conversion to MPEG4, DivX, etc. (no multimedia extensions in the CPU design);
PCs are much better for final conversion, so I've been exploring using a
combination. SGI's rather strange MJPEG QuickTime format can thankfully be
understood ok using the Morgan codec under Windows, so conversions work
rather nicely with TMPEGEnc and VirtualDub.

If what I'm trying to do works out well, then in the future, assuming the cost isn't
an issue, I'll probably buy whatever the latest system is that's good for
overclocking (i.e. designed for video, just boring gfx to begin with), be it Intel or
AMD, and perhaps experiment with water cooling just for a laugh. Anyone know
just how far the current E6700 can go?

Ian.

PS. My main site is here:

http://www.futuretech.blinkenlights.nl/sgi.html
April 23, 2007 9:49:00 AM

Senor_Bob writes:
> Not that I even remotely suggest you upgrade to it, but Gecube does offer an X1950XT in AGP,
> which is the champion of the present AGP cards, and has the 48 pixel shaders. ...

Hehe, yes, very nice. 8) Pity they don't have it with 512MB, but nicely clocked up
by default.


> ... But I assume that wasn't out when you upgraded.

Yes, I bought the Sapphire the week it became available from typical dealers here.


> I wouldn't be surprised either way on the AGP version of the R600. It'll come down
> to demand like you said. Given how hard it has been to find the AGP X1950's in stock, ...

Funny thing there, I had no problem getting the AGP version, in my case from
www.dabs.com:

http://www.dabs.com/ProductView.aspx?Quicklinx=4CTP

They had plenty available before xmas. The only places that did not have them at all
back then were ordinary high street shops. The price hasn't gone down much, only
11 UKP less now than when I bought mine.


> ... I think there are too many AGP machines (including the one I'm typing on) still out
> there to abandon that market just yet.

That's what I was thinking, a bit like the way Dell has just been forced to start selling
systems with XP again because of customer demand.


> Unfortunately for me, my AGP system is an obsolete Intel socket 478 :(  so there are no

Yikes, I think I have a system like that, P4/1.8.


> ... Fortunately for me, my new E6700 ships tomorrow :)  (yay price cuts!!) so that will be
> it for AGP and me.

Funky stuff! 8) The E6700 is still way too pricey here though for my liking,
typically around 355 UKP ($710), eg.:

http://www.microdirect.co.uk/productlister.aspx?&n=2,17...

What did it cost you, may I ask?

This is what I bought:

http://www.komplett.co.uk/k/ki.asp?sku=331906

Along with:

1 x http://www.microdirect.co.uk/productlister.aspx?s=13&se...
2 x http://www.microdirect.co.uk/productlister.aspx?s=13&se...

and a Cooler Master Centurion 534 case.

Cheers! :) 

Ian.
April 23, 2007 12:17:14 PM

I think one of the problems we encounter with "is this good enough anymore" questions is that we're looking at it from the bleeding-edge performance level. Yes, the X2s aren't the top-of-the-line, newest of the new anymore, nor are the A64s or even the Pentium Ds. However, if we step away from the high-performance bleeding edge, all of those processors provide more than adequate punch for the daily user and casual gamer. Let's face it, there is little that still won't run on those CPUs at sufficient levels.

At work we run over 100 machines with Intel Pentium 4s / Ds, around 1.8GHz to 2.8GHz. They run everything that we throw at them properly and with more than enough kick. We never hear complaints of crashing, slowdowns (unless viruses are involved) or of the computers being unable to handle our business systems.

So back to the original question at hand: "is the X2 6000 good enough?" Yes. Yes it is. It may not be top-of-the-line bleeding edge, but any of my listed CPUs will do you fine.
April 23, 2007 12:52:06 PM

Quote:
Senor_Bob writes:
>
Funny thing there, I had no problem getting the AGP version, in my case from
www.dabs.com:

http://www.dabs.com/ProductView.aspx?Quicklinx=4CTP

They had plenty available before xmas. The only places that did not have them at all
back then were ordinary high street shops. The price hasn't gone down much, only
11 UKP less now than when I bought mine.


> ... I think there are too many AGP machines (including the one I'm typing on) still out
> there to abandon that market just yet.

That's what I was thinking, a bit like the way Dell has just been forced to start selling
systems with XP again because of customer demand.


> Unfortunately for me, my AGP system is an obsolete Intel socket 478 :(  so there are no

Yikes, I think I have a system like that, P4/1.8.

Funky stuff! 8) The E6700 is still way too pricey here though for my liking,
typically around 355 UKP ($710), eg.:

What did it cost you may I ask?

Ian.

My AGP system isn't quite that bad; it's a P4 3.2 with HyperThreading (Northwood core). Still obsolete, though.

The Dell/XP analogy - I like that. There are a lot of people resisting being forced into the new thing. Just look at how nVidia was forced to offer the GF7xxx series in AGP by demand after saying the series would be PCIe only.

I guess the availability of AGP X1950s is different by location. They were always out of stock here in the US until about February. Now they're not too hard to find as long as you aren't tied to a specific brand.

The E6700 was US$340 with a free game (Supreme Commander) that I wanted anyway. Considerably better than 355 pounds/$710 - no way I'd have paid that for it. At least the currency conversion is easy to do right now :D  .
April 23, 2007 1:39:28 PM

Senor_Bob writes:
> My AGP system isn't quite that bad; it's a P4 3.2 with HyperThreading (Northwood core).
> Still obsolete, though.

Not bad at all!

With respect to HT, the strangest thing I found was that for almost every test,
including all the 3DMark tests and most of the sub-tests in PCMark02/05, the
dual-XEON was slower with HT enabled. It only helped if the system had 1 CPU
disabled. I think it's because Windows is too dumb to work out that when
running 2 threads on 2 CPUs with HT, it's better to supply one thread to each
separate CPU, not both to a single CPU. For 2 CPUs, the typical speed
drop was about 5 to 10%, but in some cases it was as high as 30% or more.
Remember this is only for a system with more than one CPU though; if I disabled
one of them in the BIOS, then HT did give a significant speedup for most tasks.

In particular, the PCMark05 System Multithreaded Tests 2 and 3 were faster or a
lot faster with HT off, except for HD virus scan. The worst affected was Test 2
with 'Text Edit' and 'Image Decompression', for which having HT enabled slowed
the results down by 43% and 37% respectively.


> The Dell/XP analogy - I like that. There are a lot of people resisting being forced into
> the new thing. Just look at how nVidia was forced to offer the GF7xxx series in AGP by
> demand after saying the series would be PCIe only.

I'd certainly be surprised if there wasn't an AGP version of the next ATI offerings,
even if in some reduced form.


> The E6700 was US$340 with a free game (Supreme Commander) that I wanted
> anyway. ...

Wow! That's good!!


> At least the currency conversion is easy to do right now :D  .

;D

Cheers! :) 

Ian.
April 23, 2007 2:33:03 PM

mapesdhs,

You're not the only one who noticed that. My SQL server is a dual Xeon (2.4GHz) with HyperThreading.

I have no actual benchmarks to back this up, but with HyperThreading disabled, I've noticed a much faster response time in Apache and in SQL reports that load data from that server. The server runs Server 2003.
April 23, 2007 3:41:04 PM

mpasternak writes:
> You're not the only one who noticed that. My SQL server is a dual Xeon (2.4GHz)
> with HyperThreading.
>
> I have no actual benchmarks to back this up, but with HyperThreading disabled, I've
> noticed a much faster response time in Apache and in SQL reports that load data
> from that server. The server runs Server 2003.

Hmm, well there ya go. The other thing I read was that for two threads running HT
on a single chip, both are using the same L2 cache pool and this can cause nasty
cache contention issues for some codes. I'm just surprised this issue isn't mentioned
in any review, etc., though perhaps for most people it doesn't matter since the
majority of consumer systems would be single-CPU (for which HT does help).

Full details of the tests I ran are on my Dell forum page. The 3DMark
CPU scores and PCMark05 threaded System tests show the effect most of all. More
recently I did some real-world tests (video encoding) and got the same result, in fact
particularly bad when running two conversions at once: with HT on, the overall
throughput was 17% slower (converting 800 MJPEG frames to DivX AVI with
TMPEGEnc).

Ian.
April 23, 2007 3:47:52 PM

Sounds to me like HT has an issue with the timing. I know it allows a 2nd thread to be inserted into the CPU pipe while another one is already in it.

But it sounds like for certain cases, like databases and other SMP-enabled software, it tries to spread the load as if there were 2 CPUs, causing the system to slow down as it tries to put through 2 threads at once, rather than simply the 1, plus the added thread while there's free resource in the CPU.

Have you tried comparing the performance of, say, 3DMark01 on both - something that does not support multiple CPUs? I wonder if the move to SMP processing has rendered HT useless (and caused problems).

Or I could know nothing about what I'm talking about!

Either way, give me an X2 6000 and it'll give me more than enough kick for anything today.
April 23, 2007 6:07:49 PM

mpasternak writes:
> sounds to me that HT has an issue with the timing. I know it allows a 2nd thread to
> be inserted into the CPU pipe while another one is already in it.

And therein lies the problem. For a system with 2 CPUs, the OS ought to allocate
a 2nd thread to the other CPU first, not to a CPU that's already running something.
HT should only be employed for 3 or more threads. Recall the way Windows just
'sees' 4 cores; I don't think it can tell them apart. Nice tech, Intel; bad OS to
exploit it, MS. :\
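
One workaround, if it ever bites someone badly enough: restrict the process to one logical CPU per package with the Win32 affinity call. A minimal sketch, untested on the Dell; the 0x5 mask assumes logical CPUs 0 and 2 sit on different physical packages (check the enumeration first, it varies by BIOS):

[code:1:36c6a4f7fb]
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Allow this process to run only on one logical CPU per physical
       package, so two threads can never share one package's HT units.
       Mask 0x5 = binary 0101 = logical CPUs 0 and 2 (assumed layout). */
    DWORD_PTR mask = 0x5;

    if (!SetProcessAffinityMask(GetCurrentProcess(), mask))
        printf("SetProcessAffinityMask failed: %lu\n", GetLastError());
    else
        printf("Affinity mask set to 0x%lx\n", (unsigned long)mask);

    /* ...launch the two worker threads as usual... */
    return 0;
}
[/code:1:36c6a4f7fb]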


> But sounds like for certain cases like databases and other SMP enabled Software it tries
> to spread the load like it was 2 CPU's. ...

If you mean treating the two HT virtual cores like two physically separate CPUs, yes,
I think that's exactly what it's doing.


> Have you tried comparing the performance of say 3dmark01 on both? something that
> does not support Multiple CPU? ...

For 3DM01 there was a small speed increase with HT turned off, between 2 and 5%,
though Game 3/High showed quite a jump (9%). Note that I didn't add the data to the
Dell forum page as I never got round to running the tests with just 1 CPU comparing
HT on vs. off (quickly realised that the newer tests would be more useful). Here are
the numbers, with the 6000+ results included for comparison (remember this is with
the same gfx card):

[code:1:36c6a4f7fb]
                XEON, P4/2.66GHz    |
                2 CPUs    2 CPUs    |  6000+
                HT ON     HT OFF    |  3.12GHz
------------------------------------------------
3DMark2001:     18233     18665     |  36087
Game 1, Low:    217.6     228.7     |  457.2
Game 1, High:    65.3      69.8     |  143.4
Game 2, Low:    293.7     299.7     |  674.2
Game 2, High:   162.9     165.3     |  347.6
Game 3, Low:    194.3     202.6     |  406.0
Game 3, High:    85.6      86.2     |  186.8
Game 4:         245.1     246.5     |  357.9
[/code:1:36c6a4f7fb]

Note that the numbers for the Feature tests also shot up with the 6000+,
especially for Dot3 (324.9 -> 808.8), Vertex Shader (199.2 -> 432.3) and
Pixel Shader (207.8 -> 551.3).


> ... I wonder if the move to SMP processing has rendered
> HT useless (and causing problems)

Very likely, at least with Windows anyway. Quite possible that some Linux/BSD distros
might handle it better, I don't know.

Btw, Cinebench seems to confirm the above. For the N-CPU render test using
two CPUs and two threads, the Dell is 28% faster with HT turned off, suggesting
that indeed with HT on the system is probably pushing both threads through just
one CPU. By contrast, and just as one would expect, when rendering with 4
threads it is 16% faster with HT turned on. Funky eh? :) 
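
My reading of what's happening there (inferred from the numbers, not verified with any affinity tools):

[code:1:36c6a4f7fb]
2 threads, HT on : both may land on one package's two HT units,
                   leaving the other package idle
2 threads, HT off: one thread per package          -> 28% faster
4 threads, HT on : two threads per package's units -> 16% faster than
                   4 threads queuing on 2 logical CPUs (HT off)
[/code:1:36c6a4f7fb]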


> Either way, Give me a X2 6000 and it'll give me more than enough kick for anything today.

Yep! As the numbers show, I'd need a faster gfx card in order to benefit meaningfully
from a faster CPU (for those games I care about - a nod to the critics... ;) ), but with the
X1950 overclocked reliably to 641/783, I'm more than happy for the moment.

Cheers! :) 

Ian.
September 5, 2007 4:29:31 PM

I got the 6000 to 3.3GHz @ 1.55 vcore (220 bus x 15 multiplier)... and my water cooling isn't anything to write home about... and it's rock solid after 48 hours of testing.
September 7, 2007 1:23:03 AM

shabodah said:
I'm not disagreeing with that, but being that the 6700 is there, and there are many reviews out there showing comparisons between the 6600, 6700, and 6800, I don't find it too critical of an issue.
They are close in price and performance. I sure as hell wouldn't buy a new mobo just to have Core 2; I updated my BIOS to accept Phenom processors (I think), and I will wait to see the performance. I'm a patient man, not like those people who rushed out and purchased the iPhone.
September 7, 2007 1:26:26 AM

dasickninja said:
Quote:
Good grief, the gaming benchmarks were crap. WhoTF buys a top of the line processor and runs games at 1024x768?? Talk about spinning the gaming benchmarks in favor of Intel so they can say Intel dominates across the board...

Just checking in for a bit... now on to my comment.

You run games at low resolutions when testing CPUs so that all the strain is shifted from the GPU to the CPU. That way, there is a smaller margin of error and you know you are testing the CPU's performance and not the GPU's.

Hey, welcome back ninja
September 7, 2007 1:38:44 AM

BaronMatrix said:
Quote:
The 6000+ is to a large extent just a placeholder product, something that is there for the sake of being, just like the FX-70, 72 and 74. It's neither cheap nor performing.


Boy, you guys are a broken record. Non-performing means you can't use it for games, not that it loses to Core 2.

I spent $496 on a 4400+, so it is relatively cheap. I just want to see your faces if you have to put Core 2 down below Kuma.
Wow, ninja and BaronMatrix, the boys are all back!
September 7, 2007 1:50:32 AM

dasickninja said:
Quote:
Intel C2D Means nothing to me and neither do any of you Intel fanboys. Framerates and visual speed between mine and a 6600 or 6400 whatever wouldnt be noticeable. I will have quad core next and a 8900 gtx and pay cash for it. So you can post all your little comments you want, I will still get all the female *** i want drive my 2007 loaded Z06 vette and be happy with my choice of system and i will still have 3 wars under my belt and 19 years of service to this country. Dont come to me with your little kiddy comments about my processor owns yours Bs, get a overall well balanced system with a real resolution to play it on,do something with your life other than being a minor step ahead in processor choice, poor asses. Now go back out and play before mama comes calling you for supper...LOL

How does one go from the subject of computers to cars to female *** all in the same post... could it be.. Piddy?

That's a good question. Although I do love female ***s (you hear that, Halle Berry) and Corvettes. Sounded a little desperate, but hey, whatever.
September 7, 2007 2:29:59 AM

Umm... this thread's 4-5 months old and those posts by ninja are wayyy old now; let's let this thread rest in peace.
September 7, 2007 4:15:24 PM

Kurz said:
Max to get out of the processor is probably 3.2.
Don't expect the highest-tiered processor to overclock much.

AMD processors can't run as fast as Intel's offerings.


My 6000+ can OC to 3551 with Super Pi passed, at 373 bus max.
However, it cannot beat even a 6320 @ 3200.
September 14, 2007 10:52:21 PM

Just a quick comment after reading all of the above. My X2 5600+ is benchmarking a tad above the E6600 and just about breaking
even with the E6700. It can be had with a mobo for $150 to $200, depending on the mobo.
Its even multiplier and lower voltage make it a better buy than the 6000+.
September 15, 2007 12:18:51 AM

m25 said:
The 6000+ is to a large extent just a placeholder product, something that is there for the sake of being, just like the FX-70, 72 and 74. It's neither cheap nor performing.

Ok, now that comparison is a bit harsh... the FX-70, FX-72, and FX-74 were not practical at all... a 6000+ isn't nearly in that league... those FXs are evil... the 6000+ is just quasi-evil.