Best sub-$400 CPU

Guest
Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

For my new system my CPU budget will be about US$380 (NZ$550). I really
have two Athlon64 procs to choose from: the 4000+ or the X2-3800+.

The PC will be used mainly for 3D animation (single threaded); I don't
do much rendering these days, but occasionally. Gaming performance is
very important :) Most of my $$ will be going on a 7800GT or GTX and a
1920x1200 display.

At this stage the 4000+ is my preference - it crushes the X2 in gaming -
but I remember that when I had a dual-CPU system (even a lowly PIII) the
OS was a bit more responsive... so I can't decide :/ I've heard the
X2-3800+ OCs pretty well; could it run at 2.4GHz, and does that shorten
the life of the chip considerably?
 

Daniel

Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

gimp wrote:
>
> ... gaming performance is very important :) most of my $$ will be
> going on a 7800GT or GTX ...
>

Not worth waiting for the ATI R520 / X1800 cards?

I know, I know. They've been ages in coming, yield problems being the
biggest issue.
However, indications are that when they're launched in 4 weeks' time it
will be a hard launch, similar to the 7800GTX, with volume availability.
Just thinking it would be annoying to buy a 7800, only to have it drop
in price not long after the R520 is released, or to discover that the
equivalent ATI card completely owns it.
I'm guessing the R520 may well be an overclocker's dream (considering
they're built on 90nm technology).
 
Guest
Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

Daniel wrote:
>> ... gaming performance is very important :) most of my $$ will be
>> going on a 7800GT or GTX ...

> Not worth waiting for the ATI R520 / X1800 cards?

> I know, I know. They've been ages in coming, yield problems being the
> biggest issue.
*snip*
> I'm guessing the R520 may well be an overclockers dream (considering
> they're built on 90nm technology).

Wouldn't yield problems indicate the opposite?

--
http://dave.net.nz <- My personal site.
 

Daniel

Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

Dave - Dave.net.nz wrote:
> Daniel wrote:
>
> *snip*
>
>> I'm guessing the R520 may well be an overclockers dream (considering
>> they're built on 90nm technology).
>
>
> wouldn't yeild problems indicate the opposite?
>

My understanding is that the issues were primarily with the number of
pixel pipelines that came out working on each die. After their third
attempt at taping out the GPU, they seem to have sorted enough of the
issues out to actually release them. I understand that 16-pixel-pipeline
versions are readily available (i.e. the best yields), although I'm not
sure if 24pp or even 32pp (imagine Crossfire with those cards ... grrrr)
versions will be available in the first week of October.
As far as clock speed goes, it's a 90nm fab process, so I'd expect
higher clock speeds (apparently 600-650 MHz stock for the top-end R520
core).
I'd be very interested to see what the power consumption and heat output
numbers for the new ATI cards will be - once those annoying NDAs have
expired, of course.
Sooner or later, NVIDIA will have to go to 90nm or lower as well,
otherwise ATI will leave them in the dust.
 
Guest
Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

Daniel wrote:
> gimp wrote:
>> ... gaming performance is very important :) most of my $$ will be
>> going on a 7800GT or GTX ...
>>
>
> Not worth waiting for the ATI R520 / X1800 cards?
>
> Just thinking it would be annoying to buy a 7800, only to have it drop
> in price not long after the R520 is released, or discovering that the
> equivalent ATI card completely owns it.


My 3D app Maya can unfortunately have issues with ATI drivers. They're
probably getting better, but the industry [at least with my app] tends
towards nVidia hardware, which has been solid with Maya for several
years. Good point re the possible price drop, though; I won't buy before
the ATI release anyway.
 

Daniel

Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

gimp wrote:
>
> my 3D app Maya can have issues with ATI drivers unfortunately, they're
> probably getting better but the industry [at least with my app] tends
> towards nVidia hardware which have been solid with Maya for several
> years. but good point RE the possible price drop, i won't buy before
> the ATI release anyway.

Cool :)

[...readies to suggest Quadro card, then faints at price...]
 
Guest
Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

Tony Hill wrote:
> What is a bigger worry with overclocking the processor is that you
> often end up overlocking other parts of the system and usually that is
> what ends up limiting how high you can clock the chip. There are a
> number of websites out there that specialize in overclocking and which
> might offer you some decent insights into what you could expect from
> the chip.

Thanks for the info :p I've been doing some googling, and apparently
people have clocked it as high as 2.8GHz (!). 2.4 would be enough for
me... I would have to research it more, as I've never OC'd and don't
want to melt down the chip/mobo.

I just found this:

http://forums.extremeoverclocking.com/archive/index.php/t-183472.html

The X2 is probably gonna win out, I think :)
 
Guest
Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

"gimp" <anonymous@smeg.com> wrote in message
news:dg7ls7$okj$1@lust.ihug.co.nz...
> for my new system my cpu budget will be about US$380 (NZ$550), i really
> have two Athlon64 procs to choose from, the 4000+ or the X2-3800+.
>
> the pc will be used mainly for 3D animation (single threaded), i don't do
> much rendering these days, but occasionaly. gaming performance is very
> important :) most of my $$ will be going on a 7800GT or GTX and 1920x1200
> display.
>
> at this stage the 4000+ is my preference, it crushes the X2 in gaming, but
> i remember when i had a dual-cpu system (even a lowly PIII) the OS was a
> bit more response... so i can't decide :/ i've heard the X2-3800+ OCs
> pretty well, could that run at 2.4GHz and does it shorten the life of the
> chip considerably...?
>
>

I'm in the same boat as you..... :)
Personally though, I'm going for the X2 - future-proofing the system for
at least 6 months ;) (Yeah right)
 
Guest
Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

Bitstring <dg7ls7$okj$1@lust.ihug.co.nz>, from the wonderful person gimp
<anonymous@smeg.com> said
<snip>
>at this stage the 4000+ is my preference, it crushes the X2 in gaming,
>but i remember when i had a dual-cpu system (even a lowly PIII) the OS
>was a bit more response... so i can't decide :/ i've heard the
>X2-3800+ OCs pretty well, could that run at 2.4GHz and does it shorten
>the life of the chip considerably...?

Only elevated temperature shortens the life of the chip, and even then
it needs to be (IIRC) about 15°C higher to halve the life (from 120
years to 60! - OK, I guessed at the 120, but that's probably the usual
design goal). Merely overclocking doesn't do any harm (although ramping
the voltage to achieve a higher clock speed definitely does reduce
lifetime too, apart from the extra heat effect).

Chip power = constant * frequency * voltage * voltage .. as you can see,
ramping the (VCore) voltage has more effect than ramping the clock, but
it's still not a big issue if you can get the power away and keep the
chip cool (and remember the AMD spec says 'it'll work for X years with
the core temperature at 80°C', so you're probably already running well
inside the window).
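To put rough, purely illustrative numbers on that (not AMD specs): take
an X2 3800+ from 2.0GHz at 1.40V to 2.4GHz at 1.50V and the dynamic
power scales by roughly (2.4/2.0) x (1.50/1.40)^2 = 1.2 x 1.15 = ~1.38,
i.e. about 38% more heat to get rid of - and because the voltage term is
squared, pushing VCore further makes that climb quickly.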

Get the dual-core chip - =current= games might work better on the
4000+, but game designers know how to code dual (or quad, or more)
threaded games, so future games may run lots nicer on a dual-core chip -
and even for current games you'll at least be able to have all the WinXP
OS cr&p happening on the other core (which is significant sometimes).

I guess you could just stick a single-core 3x00+ chip in now, and wait
for the X2 4800+ to come down in price. 8>.

--
GSV Three Minds in a Can
Contact recommends the use of Firefox; SC recommends it at gunpoint.
 
Guest
Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

Bitstring <dgab82$msc$1@lust.ihug.co.nz>, from the wonderful person
Daniel <atari400@paradise.net.nz> said
>GSV Three Minds in a Can wrote:
>> ... but game designers know how to code dual (or quad, or more)
>>threaded games...
>>
>
>Really?
>
>Multi-threaded code is difficult enough in a single core CPU, let alone
>a multi-core CPU.

<snip>

Jeez, if you can do it for a 4-CPU workstation, what's the issue doing
it for a dual-core CPU? Note I didn't say they were going to do it
=perfectly= and achieve a 2x speedup in the gameplay .... but there's
plenty of stuff that's just dying for some parallel processing (the
AI(s), the UI, the graphics upstream of the graphics card, etc.).
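
A toy illustration of that sort of split, sketched with C++ std::thread
just to show the shape of it - the function names are invented, not
anyone's real engine code:

#include <atomic>
#include <thread>

std::atomic<bool> running{true};

void update_ai()    { /* pathfinding, NPC decisions, ... */ }
void render_frame() { /* build the frame and feed it to the graphics card */ }

int main() {
    // AI ticks away on one core...
    std::thread ai([] { while (running) update_ai(); });

    // ...while the main loop keeps pushing frames from another.
    for (int frame = 0; frame < 1000; ++frame)
        render_frame();

    running = false;  // tell the AI thread to stop
    ai.join();
}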

I thought avoiding deadlocks was a solved problem since Knuth volume
<n>, more years ago than I care to count, and that was without the
hardware assistance we get these days ...

>However, to say that "game designers know how to code dual (or quad, or
>more) threaded games" in the context of a dual (or multi) core CPU does
>seem a little premature at this stage.

Hmm, guess I could make some money teaching courses then; it's not like
it's rocket science or anything. Now I have a few fractal programs which
=are= going to need a bit of rocket science, but hey, that's my fault
for coding them in x87 assembler ...

--
GSV Three Minds in a Can
Contact recommends the use of Firefox; SC recommends it at gunpoint.
 

Daniel

Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

gimp wrote:
>
> ... most of my $$ will be going on a 7800GT or GTX and
> 1920x1200 display.
>

Assuming you mean an LCD (1920x1200 is the native res for 23/24" LCDs -
AFAIK), what make & model are you looking at?

I've read the Dell 24" LCDs are awesome monitors and compare favourably
with the Apple 24" Cinema LCDs.
I've also read that the Philips 24" LCDs aren't that flash.

Just curious.

Cheers.
 

Daniel

Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

GSV Three Minds in a Can wrote:
>
> ... but game designers know how to code dual (or quad, or more) threaded
> games...
>

Really?

Multi-threaded code is difficult enough on a single-core CPU, let alone
a multi-core CPU.

I know that multi-CPU systems have been around for a while, but the
instances that I've seen (and worked with) allocate individual
processes/applications to a single CPU (i.e. one CPU serving many apps),
but *not* the other way around - one app to many CPUs.
Please note, I'm referring to an app that is specifically written to
work with multiple cores (i.e. multi-core aware), and is therefore
able to avoid deadlock situations *between* cores.
Sure, you can divide some problems and run them in parallel (like how a
modern GPU renders complex 3D scenes). However, these are very specific
types of problems which can be easily segmented.

AMD and Intel were basically forced to go dual-core because of the
limitations they encountered with higher clock speeds.

Game developers aren't exactly jumping up and down with joy over the
prospect of developing multi-core games.
Agreed, they'll have to now - particularly with next-gen consoles all
using multi-core PowerPC CPUs.

However, to say that "game designers know how to code dual (or quad, or
more) threaded games" in the context of a dual (or multi) core CPU does
seem a little premature at this stage.
 

Daniel

Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

GSV Three Minds in a Can wrote:
> <snip>
>
> Jeez, if you can do it for a 4 CPU workstation, what's the issue doing
> it for a dual core CPU? Note I didn't say they were going to do it
> =perfectly= and achieve a 2x speedup in the gameplay .... but there's
> plenty of stuff that's just dying for some parallel processing (the
> AI(s), the UI, the graphics upstream of the Graphics card, etc.)
>
Okay, I'll bite.

What multi-CPU aware apps are you using then?

Just because you can run a program across X number of CPUs in a
workstation doesn't make it multi-core aware.

Also, as I said in the original post, there are some problems that can
be easily segmented - and I did mention graphics.


> I thought avoiding deadlocks was a solved problem since Knuth volume
> <n>, more years ago than I care to count, and that was without the
> hardware assistance we get these days ...
>
Avoiding deadlocks is easy. Doing so efficiently in a realtime
environment is the real trick (we already get enough deadlocks in
single-core code - and yes, the intention was to avoid a deadlock, but
it still happens).
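
The 'easy' part, for anyone following along, mostly comes down to taking
locks in one consistent order - or letting the library do it for you. A
throwaway C++ sketch (textbook stuff, nothing to do with real game code):

#include <mutex>
#include <thread>

std::mutex m1, m2;
int balance_a = 100, balance_b = 0;

void transfer(int amount) {
    // scoped_lock grabs both mutexes with a deadlock-avoiding algorithm,
    // so two threads hitting this at once can't end up waiting on each other.
    std::scoped_lock lock(m1, m2);
    balance_a -= amount;
    balance_b += amount;
}

int main() {
    std::thread t1(transfer, 10), t2(transfer, 20);
    t1.join();
    t2.join();
}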

I've used shared memory and semaphores for the locking (the usual IPC)
to coordinate between multiple processes running on a multi-CPU server.
The "application" in this sense is a combination of disparate processes.
However, none of these processes "know" they're running on a multi-CPU
system. As far as each is concerned, it's only running on a single CPU.
Each process has its own process space (i.e. it's not *shared* with the
other processes).
Surely the benefit of a multi-core CPU would be for the cores to operate
on the *same* process/memory space. Otherwise you're limited to the
types of problems that both CPUs can work on simultaneously (i.e. not
much benefit vs. a single-core CPU).
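
For the curious, the rough shape of that arrangement, cut down to a toy
POSIX sketch - the names ("/demo_shm", "/demo_sem") are invented and the
error checking is omitted:

#include <fcntl.h>
#include <semaphore.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // A shared memory region that any cooperating process can map.
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, sizeof(int));
    int* counter = static_cast<int*>(
        mmap(nullptr, sizeof(int), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));

    // A named semaphore serialises access; the process neither knows nor
    // cares how many CPUs the other processes are actually running on.
    sem_t* lock = sem_open("/demo_sem", O_CREAT, 0600, 1);

    sem_wait(lock);                      // enter critical section
    *counter += 1;                       // update the shared state
    std::printf("counter = %d\n", *counter);
    sem_post(lock);                      // leave critical section

    sem_close(lock);
    munmap(counter, sizeof(int));
    close(fd);
    return 0;
}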


> Hmm, guess I could make some money teaching courses then, it's not like
> it's rocket science or anything. Now I have a few fractals programs
> which =are= going to need a bit of rocket science, but hey, that's my
> fault for coding them in x87 assembler ...
>
If you've written a multi-threaded single-core program in assembler
(not to be confused with multi-tasking - sorry, just being sure), then
dude - surely you'd know the hassles involved in getting all that to
work.

Now imagine all those problems multiplied, because now you've got to
synchronise across 2 or more CPU cores.

Odd that you're using assembler, though? Compiler optimizations are
pretty good these days (unless you're into deliberately writing
obfuscated code, of course).
Or were you just being facetious?
 
Guest
Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

"Daniel" <atari400@paradise.net.nz> wrote in message
news:dgahfi$2gs$1@lust.ihug.co.nz...
> GSV Three Minds in a Can wrote:
>> <snip>
>>
>> Jeez, if you can do it for a 4 CPU workstation, what's the issue doing it
>> for a dual core CPU? Note I didn't say they were going to do it
>> =perfectly= and achieve a 2x speedup in the gameplay .... but there's
>> plenty of stuff that's just dying for some parallel processing (the
>> AI(s), the UI, the graphics upstream of the Graphics card, etc.)
>>
> Okay, I'll bite.
>
> What multi-CPU aware apps are you using then?
>
> Just because you can run a program across X number of CPU's in a
> workstation doesn't make it multi-core aware.

Doesn't it? I was under the impression that it had to be multi-threaded
to run on multiple processors, and that means it will take advantage of
multi-core CPUs.

> Also, as I said in the original post, there are some problems that can be
> easily segmented - and I did mention graphics.
>
>
>> I thought avoiding deadlocks was a solved problem since Knuth volume <n>,
>> more years ago than I care to count, and that was without the hardware
>> assistance we get these days ...
>>
> Avoiding deadlocks is easy. Doing so efficiently in a realtime environment
> is the real trick (we already get enough deadlocks in single core code -
> and yes the intention was to avoid a deadlock, but, it still happens).
>
> I've used shared memory and semaphores for the locking (usual IPC), to
> coordinate between multiple processes running on a multi-CPU server. The
> "application" in this sense is a combination of disparate processes.
> However, neither of these processes "know" they're running on a multi-CPU
> system. As far as they're concerned they're only running on a single CPU.
> Each process has it's own process space (i.e. it's not *shared* with the
> other processes).
> Surely, the benefit of a multi-core CPU would be for the cores to operate
> on the *same* process/memory space. Otherwise your limited to the types of
> problems that both CPUs can work on simultaneously (i.e. not much benefit
> vs. a single core CPU).
>
>
>> Hmm, guess I could make some money teaching courses then, it's not like
>> it's rocket science or anything. Now I have a few fractals programs which
>> =are= going to need a bit of rocket science, but hey, that's my fault for
>> coding them in x87 assembler ...
>>
> If you've written a multi-threaded single core program in assembler (not
> to be confused with multi-tasking - sorry, just being sure), then dude -
> surely you'd know the hassles involved in getting all that to work.
>
> Now imagine all those problems multiplied because now you've got to
> synchronise across 2 or more CPU cores.
>
> Odd you should be using assembler? Compiler optimizations are pretty good
> these days (unless your into deliberately writing obfuscated code of
> course).
> Or were you just being facetious.

--
Derek
 

Daniel

Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

Daniel wrote:
> GSV Three Minds in a Can wrote:
>
> If you've written a multi-threaded single core program in assembler (not
> to be confused with multi-tasking - sorry, just being sure), then dude -
> surely you'd know the hassles involved in getting all that to work.
>
> Now imagine all those problems multiplied because now you've got to
> synchronise across 2 or more CPU cores.
>
> Odd you should be using assembler? Compiler optimizations are pretty
> good these days (unless your into deliberately writing obfuscated code
> of course).
> Or were you just being facetious.

Plus debugging multi-threaded code is a nightmare.

Debugging multi-threaded multi-core code.... yuk.
 
Guest
Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

Bitstring <dgahfi$2gs$1@lust.ihug.co.nz>, from the wonderful person
Daniel <atari400@paradise.net.nz> said
>GSV Three Minds in a Can wrote:
>> <snip>
>> Jeez, if you can do it for a 4 CPU workstation, what's the issue
>>doing it for a dual core CPU? Note I didn't say they were going to do
>>
>> =perfectly= and achieve a 2x speedup in the gameplay .... but there's
>> plenty of stuff that's just dying for some parallel processing (the
>>AI(s), the UI, the graphics upstream of the Graphics card, etc.)
>>
>Okay, I'll bite.
>
>What multi-CPU aware apps are you using then?

None at the moment, because I can't afford a multi-CPU workstation to
play with, which is why I've been drooling over the X2 chips for some
time.
Media encoding and rendering are the obvious apps which would soak up
lots of cores/CPUs with ease.

<snip>
>
>> Hmm, guess I could make some money teaching courses then, it's not
>>like it's rocket science or anything. Now I have a few fractals
>>programs which =are= going to need a bit of rocket science, but hey,
>>that's my fault for coding them in x87 assembler ...
>>
>If you've written a multi-threaded single core program in assembler
>(not to be confused with multi-tasking - sorry, just being sure), then
>dude - surely you'd know the hassles involved in getting all that to
>work.

Minor hassles, and certainly no worse than writing interrupt-driven OS
code and trying to weave that around the regular applications. Some game
(engine) designers are really smart people (some are just glorified
graphics artists or authors or musicians, which is why the teams are now
so huge).

Yeah, debugging is an issue, but hey, these folks can't debug what they
deliver today, so nothing new there.. 8>.

>Now imagine all those problems multiplied because now you've got to
>synchronise across 2 or more CPU cores.
>
>Odd you should be using assembler? Compiler optimizations are pretty
>good these days (unless your into deliberately writing obfuscated code
>of course).
>Or were you just being facetious.

Nope, there was no other way of doing 80-bit maths on an x87 and
keeping the (right) 80-bit values in the (right) x87 registers on the
stack at that time. These days maybe you could do as well with SSEn,
although I suspect not.

If you want to 'fly through' a Julia set you can fly a lot deeper (in
reasonable time - i.e. without getting into multi-word arithmetic) with
80-bit operands than you can with 64 bits, before you run into the
pixellation limit (i.e. where adjacent pixels have the =same= floating
point value to N bits, and your picture blacks out). All the C/C++
compilers I looked at didn't really believe in 80-bit data values, and
certainly didn't have a clue how to leave them in the FPU for the whole
of a scan line.
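
These days the gcc-style compilers at least expose the 80-bit x87 type
as 'long double' on x86 (MSVC still doesn't), which is easy enough to
check with a throwaway snippet:

#include <cfloat>
#include <cstdio>

int main() {
    std::printf("double:      %d mantissa bits, eps = %g\n",
                DBL_MANT_DIG, DBL_EPSILON);
    std::printf("long double: %d mantissa bits, eps = %Lg\n",
                LDBL_MANT_DIG, LDBL_EPSILON);
    // 53 vs 64 mantissa bits: roughly 11 extra bits of zoom depth before
    // adjacent pixels collapse to the same floating point value.
    return 0;
}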

I've got some code sitting here which ought to be able to soak up
however many cores I can afford to throw at it - one thread for the
display (rate-limited by how fast the user flies &/or the availability
of frame buffers), one for the UI (may be mostly idle), and 1-N threads
(1 per frame) doing the calculation (allowing as how you may have to
dump future frames if the user decides to scroll sideways). All one
process, although nobody gets to play with anyone else's frame buffer
until it is 'done', and there is no interaction between frame <N> and
frame <N+1> during the calculation phase. Actually there isn't any
interaction between each scan line and the next one IIRC, so I could
actually toss 1024 cores at =each= frame buffer (but it isn't coded
that way).
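
The scan-line version would look something like this in C++, with a
dummy calculation standing in for the fractal maths (illustrative only -
the real thing is x87 assembler and isn't structured like this):

#include <thread>
#include <vector>

constexpr int WIDTH = 1024, HEIGHT = 768;

// Stand-in for the per-pixel work; each scan line is independent.
void render_scanline(std::vector<float>& frame, int y) {
    for (int x = 0; x < WIDTH; ++x)
        frame[y * WIDTH + x] = static_cast<float>((x * x + y * y) % 255);
}

int main() {
    std::vector<float> frame(WIDTH * HEIGHT);
    std::vector<std::thread> workers;
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 2;  // hardware_concurrency may report 0

    // No interaction between one scan line and the next, so any number of
    // cores can chew on the same frame buffer at once.
    for (unsigned t = 0; t < n; ++t)
        workers.emplace_back([&frame, t, n] {
            for (int y = static_cast<int>(t); y < HEIGHT; y += static_cast<int>(n))
                render_scanline(frame, y);
        });

    for (auto& w : workers) w.join();  // the frame is 'done' only after this
}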

Chess plays pretty well on multi-CPU systems of course, and I don't see
why an X2 is going to be any different from a two-CPU workstation in
that regard - Fritz<n> should be able to handle it right out of the box.
Not that I'm very excited by that, except for analysis - I already can't
beat Fritz on an XP3000+.

For something like Morrowind, I guess you'd turn most of the processing
power loose on the 'wandering monsters' (and sundry mobile bits of
scenery/weather) which need animating, and where the interaction between
the 'objects' is actually pretty small (and again you can play the 'next
frame, frame after that' trick).

Wait and see .. however, history says that game designers have never let
complications of technology stand in the way of consuming all the PC
they can find, and then some - and in 5(?) years' time I bet you'll have
trouble buying a single-core desktop CPU.

--
GSV Three Minds in a Can
Contact recommends the use of Firefox; SC recommends it at gunpoint.
 
Guest
Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

Bitstring <dgctac$g2t$1@lust.ihug.co.nz>, from the wonderful person
Daniel <atari400@paradise.net.nz> said
<mega snip>
>However, I still disagree with your initial statement about the current
>capability of developers in regards to multi-core CPUs.

I guess we'll just have to agree to disagree .. I'll allow as how they
aren't pushing any such games out the door yet, and as how 80% of the
games design team are technologically clueless, but in there someplace
there are some smart cookies who are quite competent to use multiple
cores (or CPUs) .. and are probably coding already. If not, they're
missing a trick.

Whether it'll add anything significant to the FPS (First Person Shooter
- not Frames/sec) I don't know, but it'll probably make Civilization 5,
or Warlords 4, or whatever, even harder to beat than they are already.
If nothing else, I'll maybe be able to play Morrowind 4 and read mail at
the same time (or maybe not - games tend to be pretty 'selfish', so the
games engine will probably steal all the cores/CPUs it can find).

Anyway, I'm off to shop in the next few weeks .. just waiting for the
X2s to get a teeny weeny bit less expensive, and for the pioneers to
finish debugging the motherboards and BIOSs for me, and the
proliferation of PSU specs to shake out.

Now if only Intel would give AMD some serious competition, we might see
prices come down a bit faster .... hmm, weren't we saying that the other
way round ~4 years ago?

--
GSV Three Minds in a Can
Contact recommends the use of Firefox; SC recommends it at gunpoint.
 

Daniel

Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

Derek Baker wrote:
>>
>>What multi-CPU aware apps are you using then?
>>
>
> Doesn't it? I was under the impression that it had to be multi-threaded to
> run on multiple-processors, and that means it will take advantage of
> multi-core CPUs.
>
>
You're right - it does.

I was incorrect in a few of my assertions and my line of thinking.
 

Daniel

Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

GSV Three Minds in a Can wrote:
>>
>> What multi-CPU aware apps are you using then?
>
> None at the moment, because I can't afford a multi CPU workstation to
> play with, which is why I've been drooling over the x2 chips for some
> time..
> Media encoding and rendering are the obvious apps which would soak up
> lots of cores/CPUs with ease.
>
My error. Just needs to be multi-threaded.


>
> Minor hassles, and certainly no worse than writing interrupt driven OS
> code and trying to weave that around the regular applications. Some game
> (engine) designers are really smart people (some are just glorified
> graphics artists or authors or musicians, which is why the team are now
> so huge).
>
> Yeah, debugging is an issue, but hey, these folks can't debug what they
> deliver today, so nothing new there.. 8>.
>
True.


>
> Nope, there (was) no other way of doing 80bit maths on an x87 and
> keeping the (right) 80 bit values in the (right) x87 registers in the
> stack at that time. These days maybe you could do as well with SSEn,
> although I suspect not.
>
> If you want to 'fly through' a Julia set you can fly a lot deeper (in
> reasonable time - i.e. without getting into multi-word arithmetic) with
> 80 bit operands than you can with 64 bits, before you run into the
> pixellation limit (i.e. where adjacent pixels have the =same= floating
> point value to N bits, and your picture blacks out). All the C/C++
> compilers I looked at didn't really believe in 80bit data values, and
> certainly didn't have a clue as to how to leave them in the FPU for the
> whole of a scan line.
>
> I've got some code sitting here which ought be able to soak up however
> many cores I can afford to throw at it - one thread for the display
> (rate limited by how fast the user flies &/or the availability of frame
> buffers), one for the UI (may be mostly idle), and 1-N threads (1 per
> frame) doing the calculation, (allowing as how you may have to dump
> future frames if the user decides to scroll sideways). All one process,
> although nobody gets to play with anyone else's frame buffer until it is
> 'done', and there is no interaction between frame<N> and frame <N+1>
> during the calculation phase. Actually there isn't any interaction
> between each scan line and the next one IIRC, so I could actually toss
> 1024 cores at =each= frame buffer (but it isn't coded that way).
>
> Chess plays pretty well on multi-CPU systems of course, an I don't see
> why an X2 is going to be any different from a two CPU workstation in
> that regard - Fritz<n> should be able to handle it right out of the box.
> Not that I'm very excited by that, except for analysis - I already can't
> beat Fritz on an XP3000+.
>
Yes. As long as problems can be segmented in such a way that a
multi-core CPU can operate on them efficiently, you get a significant
performance boost (e.g. GPUs with multiple pipelines).

Game engines (for twitch games at least) revolve around very tight
loops. You can certainly offload a number of tasks to run in parallel;
however, the trick is to do so without incurring a significant penalty
during execution (i.e. avoiding CPU cache misses as much as possible).


> For something like Morrowind, I guess you'd turn most of the processing
> power loose on the 'wandering monsters' (and sundry mobile bits of
> scenery/weather) which need animating, and where the interaction between
> the 'objects' is actually pretty small (and again you can play the 'next
> frame, frame after that' trick).
>
Yep. Indeed, when interaction with other objects is at a minimum,
processing the surrounding environment perhaps isn't such a big deal.
For turn-based (strategy) games like chess, one would expect a
noticeable benefit from multiple cores.
The tricky situations are where there is a lot of interaction going on -
again, this is most likely for realtime games (team sports, FPS, RTS).


> Wait and see .. however history says that game designers have never let
> complications of technology stand in the way of consuming all the PC
> they can find, and then some - and in 5(?) years time I bet you'll have
> trouble buying a single core desktop CPU chip.
>
As my work colleague says, "threads are evil", and life is sooo much
easier without them (google it - there are a few interesting links).
I have no doubt game developers will eventually learn to make the most
of multi-core CPUs.
I agree - I imagine that once more programs start appearing that take
advantage of multiple cores, people may ask how we ever made do with
single-core CPUs.

However, I still disagree with your initial statement about the current
capability of developers with regard to multi-core CPUs.
 

Daniel

Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

GSV Three Minds in a Can wrote:
>
> I've got some code sitting here which ought be able to soak up however
> many cores I can afford to throw at it - one thread for the display
> (rate limited by how fast the user flies &/or the availability of frame
> buffers), one for the UI (may be mostly idle), and 1-N threads (1 per
> frame) doing the calculation, (allowing as how you may have to dump
> future frames if the user decides to scroll sideways). All one process,
> although nobody gets to play with anyone else's frame buffer until it is
> 'done', and there is no interaction between frame<N> and frame <N+1>
> during the calculation phase. Actually there isn't any interaction
> between each scan line and the next one IIRC, so I could actually toss
> 1024 cores at =each= frame buffer (but it isn't coded that way).
>
> Chess plays pretty well on multi-CPU systems of course, an I don't see
> why an X2 is going to be any different from a two CPU workstation in
> that regard - Fritz<n> should be able to handle it right out of the box.
> Not that I'm very excited by that, except for analysis - I already can't
> beat Fritz on an XP3000+.
>

Time for a link:
http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2377&p=4

Quoting Tim Sweeney:
"Writing multithreaded software is very hard; it's about as unnatural to
support multithreading in C++ as it was to write object-oriented
software in assembly language. The whole industry is starting to do it
now, but it's pretty clear that a new programming model is needed if
we're going to scale to ever more parallel architectures. I have been
doing a lot of R&D along these lines, but it's going slowly."

I would say Tim Sweeney knows a bit more about game programming than
either of us.
 

mygarbage2000

Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

On Fri, 16 Sep 2005 11:34:59 +1200, Daniel <atari400@paradise.net.nz>
wrote:

>GSV Three Minds in a Can wrote:
>>
>> I've got some code sitting here which ought be able to soak up however
>> many cores I can afford to throw at it - one thread for the display
>> (rate limited by how fast the user flies &/or the availability of frame
>> buffers), one for the UI (may be mostly idle), and 1-N threads (1 per
>> frame) doing the calculation, (allowing as how you may have to dump
>> future frames if the user decides to scroll sideways). All one process,
>> although nobody gets to play with anyone else's frame buffer until it is
>> 'done', and there is no interaction between frame<N> and frame <N+1>
>> during the calculation phase. Actually there isn't any interaction
>> between each scan line and the next one IIRC, so I could actually toss
>> 1024 cores at =each= frame buffer (but it isn't coded that way).
>>
>> Chess plays pretty well on multi-CPU systems of course, an I don't see
>> why an X2 is going to be any different from a two CPU workstation in
>> that regard - Fritz<n> should be able to handle it right out of the box.
>> Not that I'm very excited by that, except for analysis - I already can't
>> beat Fritz on an XP3000+.
>>
>
>Time for a link:
>http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2377&p=4
>
>Quoting Tim Sweeny:
>"Writing multithreaded software is very hard; it's about as unnatural to
>support multithreading in C++ as it was to write object-oriented
>software in assembly language. The whole industry is starting to do it
>now, but it's pretty clear that a new programming model is needed if
>we're going to scale to ever more parallel architectures. I have been
>doing a lot of R&D along these lines, but it's going slowly."
>
>I would say Tim Sweeny knows a bit more about game programming than
>either of us.

In plain old C++ multithreading may be cumbersome. However, in C# or
even (cough) VB.NET it is relatively easy to run as many threads as
your app has tasks that can run asynchronously (though I've hardly ever
had to deal with more than 5 at a time). It gets a bit more tricky when
you need to synchronize the threads at some point, but you still don't
have to be a rocket scientist. While I am not a guru in assembly
programming or C++ like some folks here, I am an MCAD (Microsoft
Certified Application Developer, for those who don't know), so I
supposedly know a thing or two about programming, including
multi-threaded apps. Believe me, with the spread of .NET, soon most
business (and not only business) apps will be multithreaded.
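
The spawn-the-tasks-then-wait-for-them pattern being described, sketched
here in C++ with std::async rather than C#/.NET just to show the shape
of it (the task names are placeholders, not a real app):

#include <future>
#include <vector>

// Placeholder asynchronous tasks - stand-ins for whatever the app really does.
int fetch_quotes()   { return 1; }
int update_reports() { return 2; }
int rebuild_index()  { return 3; }

int main() {
    // Fire off each independent task; the runtime spreads them over the cores.
    std::vector<std::future<int>> tasks;
    tasks.push_back(std::async(std::launch::async, fetch_quotes));
    tasks.push_back(std::async(std::launch::async, update_reports));
    tasks.push_back(std::async(std::launch::async, rebuild_index));

    // The "synchronize at some point" part: wait for everything to finish.
    int total = 0;
    for (auto& t : tasks) total += t.get();
    return total == 6 ? 0 : 1;
}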
Rgds,
NNN
 

Daniel

Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

nobody@nowhere.net wrote:
>
> In plain old C++ multithreading may be cumbersome. However, in C# or
> even (cough) VB.NET it is relatively easy to run as many threads as
> your app has tasks that may run asynchronously (though hardly ever had
> to deal with more than 5 at a time). A bit more tricky is when you
> need to synchronize the threads at some point, but still you don't
> have to be a rocket scientist. While I am not a guru in Assembly
> programming or C++ like some folks here, but still MCAD (Microsoft
> Certified App Developer for those who don't know), so I supposedly
> know a thing or two about the programming, including multi-threaded
> apps. Believe me, with the spread of .NET soon most business (and not
> only business) apps will be multithreaded
> Rgds,
> NNN
>

Errr.... oookaaay.

Free plug for C# or VB.NET.

But I'm not quite sure how it relates to the issue.

I've also written threaded programs (albeit in Java). Being able to
write multi-threaded code is not the issue.

The issue was whether or not game developers "... know how to code dual
(or quad, or more) threaded games...", in reference to multi-core CPUs.
My stance is that the gaming industry is slowly learning to do this.
If you're talking business application or server-side programming, then
sure - multiple CPUs do make a big difference. Always have. I'm not
disputing that.

Did you read the article from that link?

Tim Sweeney said "Writing multithreaded software is very hard".

Even John Carmack and Gabe Newell have expressed concerns about
multi-core technology (primarily due to the massive learning curve).

These guys are heavyweights in the gaming development industry who
really know their stuff.

Dude, if I find out Quake 4 is written in VB.NET, then I'll become a
VB.NET convert overnight, and then proceed to convert the rest of my
Unix comrades.
 

mygarbage2000

Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

On Fri, 16 Sep 2005 13:19:09 +1200, Daniel <atari400@paradise.net.nz>
wrote:

>nobody@nowhere.net wrote:
>>
>> In plain old C++ multithreading may be cumbersome. However, in C# or
>> even (cough) VB.NET it is relatively easy to run as many threads as
>> your app has tasks that may run asynchronously (though hardly ever had
>> to deal with more than 5 at a time). A bit more tricky is when you
>> need to synchronize the threads at some point, but still you don't
>> have to be a rocket scientist. While I am not a guru in Assembly
>> programming or C++ like some folks here, but still MCAD (Microsoft
>> Certified App Developer for those who don't know), so I supposedly
>> know a thing or two about the programming, including multi-threaded
>> apps. Believe me, with the spread of .NET soon most business (and not
>> only business) apps will be multithreaded
>> Rgds,
>> NNN
>>
>
>Errr.... oookaaay.
>
>Free plug for C# or VB.NET.
>
>But, I'm not quite sure how it relates to the issue?
>
>I've also written threaded programs (albeit in Java). Being able to
>write multi-threaded code is not the issue.
>
>The issue was whether or not game developers "... know how to code dual
>(or quad, or more) threaded games...", in reference to multi-core CPUs.
>My stance is that the gaming industry is slowly learning to do this.
>If you're talking business application or server-side programming, then
>sure - multiple CPUs does a make big difference. Always has. I'm not
>disputing that.
>
>Did you read the article from that link?
>
>Tim Sweeny said "Writing multithreaded software is very hard".
>
>Even John Carmack and Gabe Newel have expressed concerns about multicore
>technology (primarily due to the massive learning curve).
>
>These guys are heavyweights in the gaming development industry who
>really know their stuff.
>
>Dude, if I find out Quake 4 is written in VB.NET, then I'll become a
>VB.NET convert overnight, and then proceed to convert the rest of my
>Unix comrades.

Surely Q4, or any upcoming game title for that matter, will NOT be
written in any of the .NET languages. Yet computers exist and are bought
not because of, and not exclusively for, games. If that were the case, a
console for a fraction of the price would be a better solution. It's the
ability to run general apps that makes a PC worthwhile. Business
apps, OTOH, are migrating to .NET in droves. Need proof? The app
I'm supporting has, among other features, links to third-party sites.
Every now and then I have to update these links, and guess what? Every
time it is about changing .cfm or .php or .plx or sometimes even .jsp
and .asp to, you guessed it, .aspx. It was never .aspx or .asp to
something non-Microsoft.
As to what multithreading can do for apps, witness Nero. Their 6.0,
when encoding video, would hog 100% of one CPU while the other stood
by idling (I'm running dual Opty). 6.6 changes the picture
drastically - now both CPUs are loaded 90% each on average (I guess
the limit now is the disk write speed), and things seem to be more than
twice as fast.
And, back to games, I highly doubt Q4 would run under any flavor of
*nix, including Linux. Even if it does, it will be released
significantly later than the Windows version, and it will run at a lower
framerate with lesser video settings on the same hardware. No, I am not
trying to convert you to forsake the blessed realm of UNIX for the
cursed kingdom of unclean Bill. UNIX still has its place in big tin,
though x86/Windows has already started nipping at its heels.
Rgds,
NNN
 
Guest
Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

On Sat, 17 Sep 2005 00:39:52 +0000, nobody@nowhere.net wrote:

<snip>
> And, back to games, I highly doubt Q4 would run under any flavor of *nix,
> including Linux. Even if it will, it will be released significantly later
> than Windows version, and it will be lower framerate at lesser video
> settings on the same hardware. No, I am not trying to convert you to
> forsake the blessed realm of UNIX for the cursed kingdom of unclean Bill.
> UNIX still has its place in big tin, though x86/Windows already started
> nipping on its heels. Rgds,
> NNN

Uhh, yeah. Actually, you are 100% wrong here. Quake is id's franchise,
and that particular game producer is remarkable in the amount of support
they provide for Unix, and Linux in particular. Right now, from Doom 1
to the latest game they produce, Doom 3, and all the games in between,
they have either a native Linux or Unix client. The first Doom-engine
games and Quake 1, 2 and 3 have all been open-sourced and have native
clients available for many Unixes besides Linux.

Quake 4 is based on the Doom 3 technology, and I would be very surprised
if a native Linux client isn't available.

Also, on my hardware with the same configs, the Windows and Linux Doom 3
clients are about 1 fps apart in timedemos. In your argument, replace Q4
with HL3 and you will be back on track.

And, hey, just to bring it all full circle, I'm typing this on an SMP
box I've had for years.

--
Peter Griffin: Chris, everything I say is a lie. Except that. And that. And
that. And that. And that. And that. And that. And that.
 

Daniel

Archived from groups: comp.sys.ibm.pc.hardware.chips,nz.comp (More info?)

nobody@nowhere.net wrote:
>
> Surely Q4 or any upcoming game title for that matter will NOT be
> written in any of .NET languages. Yet computers exist and are bought
> not because of and not exclusively for games. If that was the case, a
> console for a fraction of the price would be better solution. It's the
> ability to run general apps that makes a PC worthwhile. Business
> apps, OTOH, are migrating to .NET in droves. Need a proof? The app
> I'm supporting has, among other features, links to third party sites.
> Every now and then I have to update these links and guess what? every
> time it is about replacing .cfm or .php or .plx or sometimes even .jsp
> and .asp to, you guessed it, .aspx. It was never .aspx or .asp to
> something non-Microsoft.
> As to what multithreading can do for apps, witness Nero. Their 6.0,
> when encoding video, would hog 100% of one CPU while the other stands
> by idling (I'm running dual Opty). 6.6 changes the picture
> drastically - now both CPUs are loaded on average 90% each (I guess
> the limit now is the disk write speed), and seems that things are
> faster more than twice.
> And, back to games, I highly doubt Q4 would run under any flavor of
> *nix, including Linux. Even if it will, it will be released
> significantly later than Windows version, and it will be lower
> framerate at lesser video settings on the same hardware. No, I am not
> trying to convert you to forsake the blessed realm of UNIX for the
> cursed kingdom of unclean Bill. UNIX still has its place in big tin,
> though x86/Windows already started nipping on its heels.
> Rgds,
> NNN
>
[Sigh] - 'tis time for me to get off this particular very well-ridden
bus. I might get back on if you ever decide to return to the issue -
assuming you even understand what it is.

Enjoy your crusade, NN.

[...sits on the fence, awaiting the UNIX hordes to arrive...]