Does a high FSB really matter??

Guest
Archived from groups: alt.comp.periphs.mainboard.asus

Hi Folks,
There's a thread going in the Abit group asking about the performance
gains of overclocking an NF7. I'm starting a new thread along these
lines as it may be of interest generally and as I used my fandablious
Asus A7V600, I thought I might as well post a copy here as well!

I'm basically stating that IMHO, one shouldn't get too worked up about
having super fast FSB speeds and pushing your RAM to the limit (and
possibly forking out loads of dosh on ultra high speed stuff that you
might not really need). THE most important speed in a given system is
the actual internal CPU clock speed. (I obviously appreciate that in
some scenarios, the two are linked.)
As a very rough and ready test (I didn't have too much time to spend on
this when I did it), I wanted to use a real life application that could
use all the processing power it could get. I chose 'Reaktor' as
I was working on it at the time. (For those that don't know, it's an
audio synthesiser package that allows you to build your own virtual
instruments that work in real time.)
As I had an unlocked Barton, it was easy to alter the multiplier and
FSB.
Reaktor has a '%CPU utilisation' meter. A good real life indicator of
what's going on under the hood. Try too complex a patch, get up to 100%
and you can't go any further!
Take a look at the figures below. A summary of the findings, however, is
this:
As you increase the CPU clock, the %CPU utilisation goes down, as you
would expect. It varies quite a lot: an 800MHz original Athlon is
peaking out at 100%. Switch to the Barton at 1.46GHz and it's gone down
to 32%.
Now change the Barton speed. At 1.8GHz it's only using 25% CPU, and at
2.31GHz, it's only 19%. This is at a DDR420 FSB speed.
Useful differences.
Now if you switch to 2.3GHz, but use only a DDR206 FSB (under half) and
crank the multiplier right up, you only lose 1%.
Bearing in mind that's one extreme to the other, can you see my point??

I also thought it would be interesting to have a quick look at how games
would be affected, so I ran 3Dmark2001 on these two extremes.
I was using a Ti4200.
Remember that I'm not really interested in synthetic benchmarks here; I
want real life speed improvements. Lots of the tests do rely on pure
data throughput. The only meaningful numbers are the FPS readings. Game
1 (Dragothic?) does shift a fair amount of data. Nature is heavily GPU
based.
Again, bear in mind that I'm only using the extreme FSB speeds.
How much difference would there be using, say, a gig of normal £130 PC3200
compared to £340 of PC4400? A few percent? Are you really going to
notice this outside the benchmark sheet?
Remember, I DO realise sometimes you have to get high FSB speeds to push
the CPU speed up. But sometimes you don't ;-)

Hope this was interesting.


*************************************************************************
Quick test to see the effect of raw CPU megahertz compared to
varying the RAM/FSB speed.
The %CPU score is the processor utilisation running a standard (complex)
patch in Reaktor.

My Asus A7V600 with unlocked Barton XP2500+

CPU clock (GHz)   FSB (DDR MHz)   Multiplier   RAM (MHz)   %CPU
2.31              420             11           210         19
2.32              333             14           166         19
2.30              256             18           128         20
2.30              206             22.5         103         20

1.80              400             9            200         25
1.83              333             11           166         25
1.80              266             13.5         133         26

1.46              266             11           133         32

Interestingly, on my Athlon Slot A 800MHz:
0.80              200             8            100         ~100
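
As a quick sanity check on the table, here is a minimal C sketch
(illustration only, not part of the test above) that recomputes the CPU
clock from the DDR-rated FSB and the multiplier, plus the %CPU you would
predict if utilisation scaled purely with 1/clock from the 1.46GHz/32% row.
The quoted FSB figures are rounded, so the recomputed clocks can be out by
a percent or so, but the predictions land within a point or two of the
measured values, which is exactly the point being made.

    /* fsb_check.c - recompute CPU clock and predicted %CPU from the table
       above.  "Predicted" assumes utilisation scales purely with 1/clock,
       using the 1.46 GHz / 32% row as the reference point. */
    #include <stdio.h>

    int main(void)
    {
        /* ddr_fsb is the DDR-rated FSB; the real bus clock is half of it */
        struct { double ddr_fsb, mult, measured_pct; } rows[] = {
            { 420, 11.0, 19 }, { 333, 14.0, 19 }, { 256, 18.0, 20 }, { 206, 22.5, 20 },
            { 400,  9.0, 25 }, { 333, 11.0, 25 }, { 266, 13.5, 26 },
            { 266, 11.0, 32 },
        };
        const double ref_ghz = 1.46, ref_pct = 32.0;
        int n = (int)(sizeof rows / sizeof rows[0]);

        printf("%-9s %-6s %-11s %-11s %-11s\n",
               "DDR FSB", "Mult", "Clock GHz", "Measured %", "Predicted %");
        for (int i = 0; i < n; i++) {
            double clock_ghz = rows[i].ddr_fsb / 2.0 * rows[i].mult / 1000.0;
            double predicted = ref_pct * ref_ghz / clock_ghz;
            printf("%-9.0f %-6.1f %-11.2f %-11.0f %-11.1f\n",
                   rows[i].ddr_fsb, rows[i].mult, clock_ghz,
                   rows[i].measured_pct, predicted);
        }
        return 0;
    }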


I also compared:
result 1 (flat out FSB)
result 4 (really throttled back slow RAM)
With some 3Dmark2001 benchmarks:
Result   3DMark   Game 1 low (fps)   Game 1 high (fps)   Nature (fps)
1        9811     158.3              60.5                41.1
4        11388    179.4              77.8                42.5
--
__________________________________________________
Personal email for Gareth Jones can be sent to:
'usenet4gareth' followed by an at symbol
followed by 'uk2' followed by a dot
followed by 'net'
__________________________________________________
 

Paul
Archived from groups: alt.comp.periphs.mainboard.asus

In article <laqaTfFrnktAFw26@nospam.demon.co.uk>, Gareth Jones
<usenet@nospam.demon.co.uk> wrote:

> Hi Folks,
> There's a thread going in the Abit group asking about the performance
> gains of overclocking an NF7.
<<snip>>

I would say, to a first order approximation, that in a well-designed
processor with a large cache, CPU core frequency is all that matters.
Most applications (business apps, web surfing, emailing) don't have
pathological memory subsystem behavior.

There are some classes of problems that are "cache busters".
They have poor locality of reference (i.e. a large memory footprint,
visiting memory addresses in a seemingly random way). One example
of this class of problem is simulation of chip designs. Another
is the fluid flow problem that Harlan Stockman posted about
here not too long ago.
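
To make "poor locality of reference" concrete, here is a minimal C sketch
(an illustration only, nothing to do with the benchmark linked below): it
walks the same large array once sequentially and once in a shuffled order.
On most machines the shuffled walk is several times slower, and that gap
is the part of the workload that memory/FSB speed governs rather than the
CPU clock.

    /* locality.c - sequential vs. random access over one large array.
       Array size and layout are arbitrary; shrink N if short of RAM. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (16 * 1024 * 1024)   /* 16M ints = 64 MB, far bigger than any L2 cache */

    static unsigned long long rng_state = 88172645463325252ULL;
    static unsigned long long xorshift64(void)
    {
        rng_state ^= rng_state << 13;
        rng_state ^= rng_state >> 7;
        rng_state ^= rng_state << 17;
        return rng_state;
    }

    int main(void)
    {
        int    *data = malloc((size_t)N * sizeof *data);
        size_t *idx  = malloc((size_t)N * sizeof *idx);
        if (!data || !idx) return 1;

        for (size_t i = 0; i < N; i++) { data[i] = (int)i; idx[i] = i; }

        /* Fisher-Yates shuffle to build a random visiting order */
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = (size_t)(xorshift64() % (i + 1));
            size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
        }

        long long sum = 0;
        clock_t t0 = clock();
        for (size_t i = 0; i < N; i++) sum += data[i];        /* cache friendly */
        clock_t t1 = clock();
        for (size_t i = 0; i < N; i++) sum += data[idx[i]];   /* cache buster  */
        clock_t t2 = clock();

        printf("sum=%lld  sequential=%.2fs  random=%.2fs\n", sum,
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC);
        free(data);
        free(idx);
        return 0;
    }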

I would say repeat your tests with this benchmark:

http://users.viawest.net/~hwstock/bench/3d0/3d0.zip

Instructions and some background info are here:

http://www.abxzone.com/forums/showthread.php?t=70142

The benchmark rates your system in units of MUPS.
Try, for example, just bumping the multiplier a step at
a time, and I bet the MUP rating doesn't move an inch.

This is an example of a real application (fluid flow simulator
for a fluid flowing through some kind of particulate) and
the author of that benchmark writes software for it for
a living.

Post back your results :)

My 1.8GHz P4 with SDRAM only got 2.73 MUPS and you should be
able to beat that easily.

Paul
 
Guest
Archived from groups: alt.comp.periphs.mainboard.asus

<<snip>>
>Remember, I DO realise sometimes you have to get high FSB speeds to push
>the CPU speed up. But sometimes you don't ;-)
>
>Hope this was interesting.
>
The rule of thumb is not to run the CPU at much more than 5x the FSB. If
you always ignore that then you will run into trouble; otherwise not
so much.
 

Nero
Archived from groups: alt.comp.periphs.mainboard.asus

Que ???????????????????????????

"Gareth Jones" <usenet@nospam.demon.co.uk> wrote in message
news:laqaTfFrnktAFw26@nospam.demon.co.uk...
> Hi Folks,
> There's a thread going in the Abit group asking about the performance
> gains of overclocking an NF7.
<<snip>>
 

Guest
Archived from groups: alt.comp.periphs.mainboard.asus

=|[ Gareth Jones's ]|= wrote:

> I'm basically stating that IMHO, one shouldn't get too worked up about
> having super fast FSB speeds and pushing your RAM to the limit...

Nice post, and thanks for posting data.
I think a major thing to be optimised is the swapfile - especially with
large memory - but I don't like to put FSB out of the picture.

A higher FSB should directly help processor context switches between
threads, lowering the operating system's overhead for multitasking and
keeping the machine smooth with lots of threads and processes open.
At some level of work, the OS's control of the processor will start
to bite against the memory's ability to feed the processor's cache quickly
enough and get enough work done before another thread needs to get in.
Perhaps this might* be noticeable with a SYSmark benchmark? Or if you often
use virtual devices like VMware and RAM drives, a higher FSB should correspond
to fewer glitches and out-of-time critical threads.

It won't affect a single process that is cache sympathetic, but when there are
enough processes running simultaneously, FSB could become more significant
(on a heavily loaded machine) - if that's how your machine gets used.

No data here though :/

A high FSB makes some kinds of processing feasible which might become more
prominent in the future. Processes with large random memory demands, e.g.
physical simulations, neural network trainers, and speech and scene
recognition, could crave FSB speed.

There are not so many applications where FSB is critical right now, but it
could be important for future proofing and for future coding of applications,
which we have the rigs to investigate right now ;)

I'm also rather bemused that my theoretical outlook is not borne out by much
data; the engineers must have done a real good job of maximising cache
efficiency - so much so that it's almost like we should consider 'main memory'
to be the processor's L1 cache, think of the onboard memory as swapfile space,
and the hard drive as practically useless for memory management :)

Best regards,
--
' android
 
Guest
Archived from groups: alt.comp.periphs.mainboard.asus

It would be my 2 cents that the tests performed will directly impact
the "differential" readings. Programs that are more memory intensive
will benefit more from a higher memory frequency. Programs with a
smaller memory usage requirement will be more responsive to CPU
frequency. What is a good test? Dunno, but maybe for memory usage
testing, try converting some video files.

--
Best regards,
Kyle
"Gareth Jones" <usenet@nospam.demon.co.uk> wrote in message
news:laqaTfFrnktAFw26@nospam.demon.co.uk...
| Hi Folks,
| There's a thread going in the Abit group asking about the performance
| gains of overclocking an NF7. I'm starting a new thread along these
| lines as it may be of interest generally and as I used my fandablious
| Asus A7V600, I thought I might as well post a copy here as well!
|
 
Guest
Archived from groups: alt.comp.periphs.mainboard.asus

Gareth Jones wrote:
> Hi Folks,
<<snip>>
> Now if you switch to 2.3GHz, but use only a DDR206 FSB (under half), and
> crank the multiplier right up, you only lose 1%
> Bearing in mind that's one extreme to the other, can you see my point??

Not really. You are specifically testing CPU utilisation and wonder why
memory bandwidth does not affect it.

Try upgrading your video card - you think that will affect it too? What
about your hard drive? Monitor size?

Your conclusion is that CPU internal frequency is the only important factor
in crunching numbers - well done.

<<snip>>
> Again, bear in mind that I'm only using the extreme FSB speeds.
> How much difference would there be using say a gig of normal £130 PC3200
> compared to £340 of PC4400 ?? a few percent ??? Are you really going to
> notice this outside the benchmark sheet??

Unless you run the memory in synch with the FSB you will find that your FSB
becomes saturated and is then the weakest link in the chain. You can't very
well transfer data faster than "210MHz" if the FSB is run at 210MHz, even if
the memory bus is run at 1GHz. I didn't see where you stated whether memory
was being run in synch with FSB or not.
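
To put some rough numbers on that (assuming a 64-bit wide, double-pumped
bus for both the Athlon FSB and the DDR memory, and ignoring all protocol
overheads), here's a back-of-envelope sketch:

    /* bandwidth.c - peak theoretical bandwidth for the bus speeds in this
       thread.  bytes/s = clock * 2 transfers per clock * 8 bytes per transfer. */
    #include <stdio.h>

    static double peak_gb_s(double bus_mhz)
    {
        return bus_mhz * 1e6 * 2.0 * 8.0 / 1e9;
    }

    int main(void)
    {
        double fsb_mhz[] = { 210, 103 };   /* the "DDR420" and "DDR206" extremes */
        double ram_mhz[] = { 200, 275 };   /* PC3200 and PC4400                  */

        for (int i = 0; i < 2; i++)
            printf("FSB %3.0f MHz (DDR%3.0f): %.2f GB/s peak\n",
                   fsb_mhz[i], fsb_mhz[i] * 2, peak_gb_s(fsb_mhz[i]));
        for (int i = 0; i < 2; i++)
            printf("RAM %3.0f MHz (DDR%3.0f): %.2f GB/s peak\n",
                   ram_mhz[i], ram_mhz[i] * 2, peak_gb_s(ram_mhz[i]));
        return 0;
    }

PC4400's extra headroom over PC3200 (4.4 vs 3.2 GB/s peak) only shows up if
the FSB itself is clocked high enough to carry it; at a DDR206 FSB, roughly
1.65 GB/s is the ceiling regardless of how fast the RAM is.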

> Remember, I DO realise sometimes you have to get high FSB speeds to push
> the CPU speed up. But sometimes you don't ;-)
>
> Hope this was interesting.

Not really. Your understanding of computer architecture has resulted in a
fair amount of wasted time.

Obviously if you are not using tasks that require huge amounts of memory
bandwidth, then huge amounts of memory bandwidth won't help.

<<snip>>
> I also compared:
> result 1 (flat out FSB)
> result 4 (really throttled back slow RAM)
> With some 3Dmark2001 benchmarks:
> 3DMark Game1 low Game1 high Nature
> 1 9811 158.3 60.5 41.1
> 4 11388 179.4 77.8 42.5


Here you have a difference of up to nearly 30% on a test that is designed
primarily to stress your video subsystem. Not bad going, really. It does
highlight the fact that in some situations memory bandwidth IS important. I
could write you a synthetic benchmark that would be almost completely
independent of CPU speed and entirely dependent on memory bandwidth. Or
vice versa. You said you are not interested in synthetic benchmarks - that's
fine, but unless you have an understanding of WHY a test varies with memory
bandwidth or not, then there's little point in doing the testing at all -
you certainly won't be able to draw sensible conclusions.
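
For what it's worth, here is a minimal sketch of the kind of pair I mean
(purely illustrative, nothing tuned): one loop streams through a buffer far
bigger than the cache, so it tracks memory bandwidth; the other does the
same number of operations on a single value, so it tracks only the CPU clock.

    /* two_loops.c - a memory-bound loop and a compute-bound loop, timed
       separately.  Sizes and iteration counts are arbitrary. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (16 * 1024 * 1024)   /* 64 MB of ints, far bigger than L2 cache */

    int main(void)
    {
        int *a = malloc((size_t)N * sizeof *a);
        if (!a) return 1;
        for (size_t i = 0; i < N; i++) a[i] = (int)(i & 0xff);

        /* Memory bound: stream through 64 MB eight times; nearly every
           access comes from RAM, so memory/FSB speed dominates. */
        clock_t t0 = clock();
        long long sum = 0;
        for (int pass = 0; pass < 8; pass++)
            for (size_t i = 0; i < N; i++) sum += a[i];
        clock_t t1 = clock();

        /* Compute bound: the same number of operations on one value held
           in a register, so only the CPU clock matters. */
        double x = 1.0001;
        for (long long i = 0; i < 8LL * N; i++) x = x * 1.0000001 + 1e-9;
        clock_t t2 = clock();

        printf("sum=%lld x=%.6f  memory-bound=%.2fs  compute-bound=%.2fs\n",
               sum, x, (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC);
        free(a);
        return 0;
    }

Run it while changing only the FSB, then only the multiplier, and the first
timing should follow the FSB while the second follows only the core clock.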

Ben
--
A7N8X FAQ: www.ben.pope.name/a7n8x_faq.html
Questions by email will likely be ignored, please use the newsgroups.
I'm not just a number. To many, I'm known as a String...
 
Guest
Archived from groups: alt.comp.periphs.mainboard.asus

In message <2hplvfFfmjhaU1@uni-berlin.de>, Ben Pope <spam@hotmail.com>
writes
>> Bearing in mind that's one extreme to the other, can you see my point??
>
>Not really. You are specifically testing CPU utilisation and wonder why
>memory bandwidth does not affect it.

That is not correct. I was testing a complete application, with input and
output, not just something sitting in the CPU cache.


>
>Try upgrading your video card - you think that will affect it too? What
>about your hard drive? Monitor size?
>
>Your conclusion is that CPU internal frequency is the only important factor
>in crunching numbers - well done.

Don't be a patronising smart arse. I was demonstrating that for some
real life applications, for most people, worrying about squeezing the
last megahertz out of the FSB isn't worth it if it involves an excessive
amount of time or money.

>>
>> Hope this was interesting.
>
>Not really. Your understanding of computer architecture has resulted in a
>fair amount of wasted time.
>

I don't think so. Taking your attitude would mean that nobody would
bother doing any benchmarking speed tests on real life apps at all!

>
>> 3DMark Game1 low Game1 high Nature
>> 1 9811 158.3 60.5 41.1
>> 4 11388 179.4 77.8 42.5
>
>
>Here you have a difference of up to nearly 30% on a test that is designed
>primarily to stress your video subsystem. Not bad going, really. It does
>highlight the fact that in some situations memory bandwidth IS important. I
>could write you a synthetic benchmark that would be almost completely
>independent of CPU speed and entirely dependant on memory bandwidth. Or
>vice versa. You said you are not interested in synthetic benchmarks - thats
>fine, but unless you have an understanding of WHY a test varies with memory
>bandwidth or not, then there's little point in doing the testing at all -

Do I sense that superior patronising tone once again?
I fully realise that in SOME situations bandwidth is important, and I
too could also write a synthetic benchmark to demonstrate speed
differences. You're missing my point.

As for the 3DMark figures, yes, the greatest difference is around 28%,
but without checking, I'd take a guess that it's because it's got huge
texture maps and my graphics card only has something like 64MB of
memory.
The low detail one is only about 13% off, Nature is only around 3%
slower, and - it's a big 'and' - these figures are at the EXTREME
ends of the scale. Someone who's trying to squeeze their system by
pushing DDR440 speeds (and sometimes spending loads of cash by swapping
normal PC3200 RAM for something more esoteric) could comfortably use, say,
400-420 and just up the multiplier without noticing much (any?)
difference in real life.

I'll give you a different real life example which many more people will
be familiar with. Take a 7.5GB DVD image and re-compress it to 4.3GB.

Once again, I've taken a fairly large difference in FSB speed (although
not as large as the 3DMark extremes). The final clock speed is 2GHz; I
can achieve it with:
DDR400 with a multiplier of 10
DDR266 with a multiplier of 15
That's a 50% increase in FSB speed. What's the difference in compression
time??
2.75%
Yup, takes about 15 min, and the much faster RAM does it around 20 sec
faster. Wow! ... NOT!!
You can bet that if I only had PC2700 RAM in my machine I wouldn't be
losing any sleep wondering if I should upgrade to PC3200 let alone
anything faster.
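
Taking those quoted figures at face value, the arithmetic looks like this
(a throwaway sketch, nothing more):

    /* dvd_math.c - the numbers behind the DVD shrink example above. */
    #include <stdio.h>

    int main(void)
    {
        double ddr400_mhz = 400.0 / 2.0 * 10.0;  /* 200 MHz bus x 10 = 2000 MHz */
        double ddr266_mhz = 266.0 / 2.0 * 15.0;  /* 133 MHz bus x 15 = 1995 MHz */
        double job_s      = 15.0 * 60.0;         /* "about 15 min" encode       */
        double saved_s    = job_s * 0.0275;      /* the measured 2.75% delta    */

        printf("DDR400 x10 = %.0f MHz, DDR266 x15 = %.0f MHz\n",
               ddr400_mhz, ddr266_mhz);
        printf("FSB increase: %.0f%%, time saved: roughly %.0f s out of %.0f s\n",
               (400.0 / 266.0 - 1.0) * 100.0, saved_s, job_s);
        return 0;
    }

Roughly 25 seconds out of 15 minutes, in other words - the same ballpark as
the ~20 seconds measured.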


--
__________________________________________________
Personal email for Gareth Jones can be sent to:
'usenet4gareth' followed by an at symbol
followed by 'uk2' followed by a dot
followed by 'net'
__________________________________________________
 
Guest
Archived from groups: alt.comp.periphs.mainboard.asus

In message <nospam-2705041638110001@192.168.1.177>, Paul
<nospam@needed.com> writes
>There are some classes of problems that are "cache busters".
>They have poor locality of reference (i.e. large memory footprint
>and visit memory addresses in a seemingly random way). One example
>of this class of problem, is simulation of chip designs. Another
>is this fluid flow problem that Harlan Stockman posted about
>here not too long ago.
>
>I would say repeat your tests with this benchmark:
>
>http://users.viawest.net/~hwstock/bench/3d0/3d0.zip
>The benchmark rates your system in units of MUPS.
>Try, for example, just bumping the multiplier a step at
>a time, and I bet the MUP rating doesn't move an inch.
>
>Post back your results :)
>
>My 1.8GHz P4 with SDRAM only got 2.73 MUPS and you should be
>able to beat that easily.

I'm a bit confused reading this.... I'd have thought that if this
program was very memory intensive, one WOULD notice the MUP rating
changing.
Maybe the smiley later on indicates a bit of sarcasm ;-)

Whatever, I got 4.7 MUPS at full FSB and it dropped to 4.0 at a lower
setting (to be honest, I did it earlier on today and now I can't exactly
remember what I set it to !?! ... getting late... time for bed!!)
17.5% difference. That's a reasonable amount. Yes, that shows that some
apps will benefit from the data throughput.
(Although I wonder how many normal people analyse fluid dynamics ;-)

What's really upset me about this benchmark though is that I also tried
it on the other half's machine which is a 2.4GHz (800MHz FSB) P4 running
at 3.0GHz.
The MUP rating went to 10.4 !! That's quick. And probably does
demonstrate a good example of where a more advanced memory architecture
does indeed shine.
I wonder if she'll notice if I swap machines ;-)

But seriously, I wonder how much difference there is in normal apps.
Maybe I'll have a play tomorrow.

Interesting.

--
__________________________________________________
Personal email for Gareth Jones can be sent to:
'usenet4gareth' followed by an at symbol
followed by 'uk2' followed by a dot
followed by 'net'
__________________________________________________
 

Paul
Archived from groups: alt.comp.periphs.mainboard.asus

In article <hyqbZjNos+tAFw0a@nospam.demon.co.uk>, Gareth Jones
<usenet@nospam.demon.co.uk> wrote:

> In message <nospam-2705041638110001@192.168.1.177>, Paul
> <nospam@needed.com> writes
<<snip>>
> >Post back your results :)
> >
> >My 1.8GHz P4 with SDRAM only got 2.73 MUPS and you should be
> >able to beat that easily.
>
> I'm a bit confused reading this.... I'd have thought that if this
> program was very memory intensive, one WOULD notice the MUP rating
> changing.
> Maybe the smiley later on indicates a bit of sarcasm ;-)

No, the smiley is because few people follow up; it is friendly
encouragement. Harlan didn't get much in the way of participation,
even when he requested feedback over on Abxzone.

>
<<snip>>
>
> Interesting.

My only purpose in picking this application is that it is an example of
a pathological application, one that picks on the memory subsystem.
A person could write worse code on purpose, but there wouldn't be much
point. Another example of cache buster code is the software that
runs on telephone company switching equipment. Whether the processor
has a cache or not makes no difference at all when running that code.
The code is called "run to completion", a linear stream of
code, so the cache typically never gets to reuse anything.

Personally, I like the empirical approach to learning, because it
makes you ask questions that learning from a textbook just doesn't
do to the same extent. Don't let any of our comments stop you!

HTH,
Paul