Games just don't use multiple CPU cores?

March 19, 2007 4:52:37 AM

So why aren't developers investing the time and money to produce a better end-user experience by utilizing multiple cores?

Multi-CPU systems have been around for a long, long time, so why have developers (and the companies paying them) NOT included specific support for multi-core systems?

Here is the problem:

CPU makers can't really go much faster (GHz) without power/heat issues.
Software engineers want to code support for multiple CPUs, but the bean counters say "NO, it takes longer and is more expensive". So what are we left with?

A hardware industry saying "this is the future" and a software industry saying "we can't spend the time/money to support it" -- the end result? Clunky, slow games (Microsoft's FSX comes to mind) that under-utilize the current hardware.

This is a pretty significant problem, and the software engineers don't appear to be agreeing with the hardware engineers.

Microsoft doesn't care; its mission is clear: "generate revenue for the least amount of work and leverage it". So where does that leave the consumer -- the Xbox 360, the PS3? Is this how PC gaming dies?

Rob.
March 19, 2007 5:20:46 AM

Well, the latest title in my collection would be Battlefield 2142, and when I wasn't maxing out my dual-core processor with BOINC, it seemed to me that both cores were being partially utilized by the game. One core was around 60%, the other around 40%. So I can tell you that these games do exist; you just might not be looking hard enough.
March 19, 2007 5:55:00 AM

Yes, Battlefield 2142 is a wonderful example of bad coding.

AMD officially started shipping the Athlon 64 X2 at Computex on 1 June 2005. That is almost two years ago, and the crappy Battlefield engine still doesn't support multithreading.
I play this game a lot and it's really great, but it's a real CPU hog.
I have to overclock my Athlon 64 X2 and my memory to their limits to get good frame rates.
March 19, 2007 5:57:08 AM

There is a system out there that matches h/ware and s/ware....
It's called a Mac :lol:  :lol:  :lol: 

Seriously tho... game development right now is behind h/ware dev. Just look at DX10. Having said that, most of the decent games that have been around for 6 months or more DO in fact utilize at least some part of the 2nd core. I know this from the bar graph on my G15 KB.

I also know that games like Oblivion aren't REALLY optimised for dual core... (or anything less than an 8800GTX).

Given that a game MAY be in development for some time before we hear about it, it can be a tough choice:

Keep going single core?
Start over and make it for dual core?
Get halfway through the dual-core version, then hear about DX10... start over for a DX10 version?
Get DX10 halfway done... find out Intel is selling quads for $200... start over with a quad version?

IBM/PC has ALWAYS had poorly matched s/ware and h/ware, but I think it's getting better.
March 19, 2007 6:02:52 AM

Quote:
No, there is a difference. Those games are CPU intensive, but the only reason they use both cores is that Windows is passing the extra load off to the 2nd core. In multithreaded apps, the program uses both cores at the same time. So if you're running a dual-core CPU with 2 GHz on each core, the program will run like it's on a 4 GHz CPU. Both cores will be running at the same percentage of load at the same time. Each core will be handling half the calculations, pushing out the end result twice as fast. I'm sure someone else can explain it better than I can.


Didn't seem that way to me, but if you've got a way to back that up, I'll be convinced.
March 19, 2007 7:01:04 AM

I agree with Jack there... and we will probably see more multithreaded games in 2007 :b It's the future.
March 19, 2007 7:10:05 AM

http://www.gamespot.com/features/6166198/p-6.html

Check this page out. The benchmarks show a lower-clocked quad-core handily beating a higher-clocked dual-core. Whoever is saying multi-core applications don't exist is living on another planet. Have you tried running either Supreme Commander or Company of Heroes with full physics on a single core? Try it and watch your CPU check in with post-traumatic stress disorder. hehe.
March 19, 2007 7:24:52 AM

We might get more GHz and faster CPUs, but we will need more cores anyway. A 4.5 GHz quad-core sounds pretty nice to me, though.
March 19, 2007 7:47:16 AM

Graphics programming on a multithreaded base is a bit complicated due to race hazards between the GFX card and the cores. On the code level, most games use engines that are licensed for something like $300,000 per game title, and these evolve every two years or more (the original Unreal Engine is still in use today). Most of the CPU-intensive code (collision, rendering, etc.) is integrated into the engine itself, so newer games have to rely on "older" graphics "middleware", a.k.a. game engines.
And you have to understand that usually, if a developer doesn't go multithreaded, it might just mean he doesn't have to (he gets enough output from a single core).
PS: even though multithreaded applications are supposed to run on multiple cores without modification, there are some coding practices that prevent this behavior (bad synchronization with the CPU cache, etc.).
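To make that "bad synchronization with the CPU cache" point concrete, here is a minimal sketch of one such practice, false sharing (my illustration in C++11, not code from the thread): two counters that happen to share a cache line force the cores to fight over it, even though no data is logically shared.

```cpp
#include <chrono>
#include <cstdint>
#include <iostream>
#include <thread>

// Two counters packed next to each other usually land on the same
// 64-byte cache line, so two cores writing them keep invalidating
// each other's cached copy ("cache ping-pong").
// (volatile just keeps the compiler from optimizing the loops away.)
struct Packed {
    volatile std::uint64_t a = 0;
    volatile std::uint64_t b = 0;
};

// Padding each counter onto its own cache line removes the contention.
struct Padded {
    alignas(64) volatile std::uint64_t a = 0;
    alignas(64) volatile std::uint64_t b = 0;
};

template <typename Counters>
long long time_ms() {
    Counters c;
    auto start = std::chrono::steady_clock::now();
    std::thread t1([&c] { for (int i = 0; i < 50000000; ++i) ++c.a; });
    std::thread t2([&c] { for (int i = 0; i < 50000000; ++i) ++c.b; });
    t1.join();
    t2.join();
    return std::chrono::duration_cast<std::chrono::milliseconds>(
               std::chrono::steady_clock::now() - start).count();
}

int main() {
    std::cout << "packed: " << time_ms<Packed>() << " ms\n";  // slower
    std::cout << "padded: " << time_ms<Padded>() << " ms\n";  // faster
}
```

Both versions are "multithreaded", but only the padded one actually scales.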
March 19, 2007 7:52:50 AM

Quote:
So why aren't developers investing the time and money to produce a better end-user experience by utilizing multiple cores?

Multi-CPU systems have been around for a long, long time, so why have developers (and the companies paying them) NOT included specific support for multi-core systems?

Here is the problem:

CPU makers can't really go much faster (GHz) without power/heat issues.
Software engineers want to code support for multiple CPUs, but the bean counters say "NO, it takes longer and is more expensive". So what are we left with?

A hardware industry saying "this is the future" and a software industry saying "we can't spend the time/money to support it" -- the end result? Clunky, slow games (Microsoft's FSX comes to mind) that under-utilize the current hardware.

This is a pretty significant problem, and the software engineers don't appear to be agreeing with the hardware engineers.

Microsoft doesn't care; its mission is clear: "generate revenue for the least amount of work and leverage it". So where does that leave the consumer -- the Xbox 360, the PS3? Is this how PC gaming dies?

Rob.



Because development isn't a case of clicking the multi-core button; there is work to be done.
Software is millions of lines of code. You can't just make it multi-core compatible overnight.

Yes, multi-CPU has been around a long time, but not in the mainstream. Server software like Apache and MySQL can run multi-core.
March 19, 2007 8:05:56 AM

For those seeing multiple-core use on current processors:
- if you see one core busy at 60% and the other at 40%, it means that threads are indeed separated and allocated to different cores; however, each thread is waiting on the other, so overall the program goes no faster than a single core at 100%.
- if you see a 110-115% cumulative load, don't forget you have an antivirus running, an OS, an integrated disk controller and a sound chip: those use up a few percent in completely separate processes, so the above still applies.

Right now, apps are merely thread-safe (meaning you have far fewer inter-process timing problems) but not yet truly multithreaded (meaning running several threads that don't depend on each other).
March 19, 2007 8:30:10 AM

SSS_DK has a good point here. With multithreaded programming you have to avoid a few spurious events, known as race conditions and deadlocks. You also get a lot of waiting around. I'll try to explain them below.

When you have 2 processors working on a shared task, you might find that one processor is completing its tasks faster than the 2nd core, so it spends a lot of its time waiting around. In situations like these, the speed increase on a multi-core system isn't anywhere near the amount you'd think. In some cases you see very little increase, but usually you're looking at no more than 50% (seeing as half the time, the other core is idle).
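To put rough numbers on that intuition (my addition for illustration, not from the magazine): Amdahl's law says that if a fraction p of the work can run in parallel on n cores, the best possible speedup is

```latex
S(n) = \frac{1}{(1 - p) + \frac{p}{n}}
```

So with p = 0.6 on a dual core, S(2) = 1 / (0.4 + 0.3) ≈ 1.43, which is roughly the "no more than 50%" figure above. And no number of cores can ever push that program past 1 / 0.4 = 2.5x.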

A race condition is where you get both cores requesting the same data and modifying it for later use. An example (which I've pinched from Custom PC magazine): say you have 2 AI routines running in a game, each AI on its own core. When an AI character shoots you, it fetches your health as a value, removes one point from it, then updates the value so you see your overall health decrease... Now what happens when both shoot you at the same time?

Start with a health of 2. Both AIs shoot you. Core 1 grabs the current health value, then does its calculations on it. HOWEVER... before Core 1 has a chance to store and update that value, Core 2 has already read it. Both cores are now working on the original value, and both subtract 1. Core 1 stores, then Core 2 stores. The result is you are still alive when the damage done should have killed you. Fun, yes? You can lock individual threads with a MUTEX (Mutual Exclusion), which prevents one thread from reading data until another thread has finished with it, but then you're running in serial, and you get far less performance from your dual-core setup.
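Here's that health race as a minimal sketch, plus the MUTEX fix (my illustration in C++11; the magazine example was prose only):

```cpp
#include <iostream>
#include <mutex>
#include <thread>

int health = 2;
std::mutex health_mutex;

// RACY version: read-modify-write with no lock. Both "AI" threads can
// read health == 2 before either stores, so one decrement is lost and
// you survive a hit that should have killed you.
void shoot_racy() {
    int h = health;   // read
    --h;              // modify
    health = h;       // write (may overwrite the other thread's store)
}

// SAFE version: the mutex makes the read-modify-write atomic, at the
// cost of serializing the two threads on this piece of data.
void shoot_safe() {
    std::lock_guard<std::mutex> lock(health_mutex);
    --health;
}

int main() {
    std::thread ai1(shoot_safe);
    std::thread ai2(shoot_safe);
    ai1.join();
    ai2.join();
    std::cout << "health = " << health << '\n';  // always 0 with shoot_safe
}
```

Swap shoot_safe for shoot_racy and, some fraction of the runs, health ends up at 1 instead of 0.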

Careless use of MUTEXes, though, leads to deadlocks, which are even more fun. OK, so in your game, the enemy's health is displayed next to your crosshairs, along with your own health score, right? You wave the crosshairs over the enemy, and the graphics engine runs the routine to update the health meter, yours first, locking the data it needs with a MUTEX. The AI, however, has run its own routine, which decides whether or not to attack you based on its own health compared to yours. It can't read your health data currently, so it enters a blocked state for the time being, waiting for the MUTEX to be released.

Now the fun begins... The graphics engine needs to read the health value of the enemy AI to finish its routine. However, the AI is running its own routine to compare health, so it has locked the data representing its own health value so it can finish its own calculation. The graphics engine then enters a blocked state, waiting for the AI routine to finish.

See the problem here? Both are now locked, each waiting for the other to finish so it can access the other's data. The result? Frozen harder than Outlook on a Patch Tuesday.
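And here is that deadlock as a minimal sketch, with the standard fix of acquiring both locks in one atomic step (again my illustration in C++11, not code from the thread):

```cpp
#include <mutex>
#include <thread>

std::mutex player_health_mtx;
std::mutex enemy_health_mtx;

// DEADLOCK-PRONE: the two routines grab the same two locks in
// opposite order, exactly as in the story above.
void graphics_routine() {
    std::lock_guard<std::mutex> a(player_health_mtx);  // yours first...
    std::lock_guard<std::mutex> b(enemy_health_mtx);   // ...can block here forever
    // draw both health bars
}
void ai_routine() {
    std::lock_guard<std::mutex> a(enemy_health_mtx);   // its own first...
    std::lock_guard<std::mutex> b(player_health_mtx);  // ...can block here forever
    // compare health, decide whether to attack
}

// FIX: acquire both mutexes atomically (std::lock is deadlock-free),
// so neither thread can hold one lock while waiting on the other.
void graphics_routine_fixed() {
    std::lock(player_health_mtx, enemy_health_mtx);
    std::lock_guard<std::mutex> a(player_health_mtx, std::adopt_lock);
    std::lock_guard<std::mutex> b(enemy_health_mtx, std::adopt_lock);
}
void ai_routine_fixed() {
    std::lock(player_health_mtx, enemy_health_mtx);
    std::lock_guard<std::mutex> a(enemy_health_mtx, std::adopt_lock);
    std::lock_guard<std::mutex> b(player_health_mtx, std::adopt_lock);
}

int main() {
    // The *_fixed pair is safe; run graphics_routine/ai_routine
    // together instead and the program can freeze exactly as described.
    std::thread gfx(graphics_routine_fixed);
    std::thread ai(ai_routine_fixed);
    gfx.join();
    ai.join();
}
```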

In literally millions of lines of code, you have to make sure this never happens. In serial code, sure, easy enough. Sequential code is nice, fast and easy to sort out, compared to parallel code, which might not even produce the same bugs twice in a row. Hell, you might only hit the bug one time in a thousand. But it's still there, and it's a bastard to solve.

Despite everything I've written, I'm not a programmer. It took me a long time to understand everything above, and I might not have put it clearly (any programmers, tell me if I went wrong anywhere). But that's the basic gist of things, and that's why massively parallel games aren't everywhere.
March 19, 2007 8:55:15 AM

Maybe someone can answer a couple of questions for me? I think I grasp that you've literally got to recode a piece of software for it to be multithreaded with parallel execution. (BTW, the number of _processes_ would not increase in a genuinely multi-core app.) It must take quite a bit of ingenuity to figure out how to spread the load so that one core can genuinely get on with something another core was going to do.

But what I don't get is the following:
1. Will it be much more hassle to code for 8 cores (say)? I mean, are we going to face this delay every time the market moves towards more cores? Are we going to have the ridiculous situation where everyone's got 8 cores, and the only multi-core software uses 2 cores at most?
2. Why are Intel banging on about quad-core gaming? Stuff like Company of Heroes, fair enough. But for a lot of games, aren't you just going to have 4 cores waiting around for the GFX?
March 19, 2007 9:00:10 AM

That's bloody well explained. I was reading about 'race conditions' and hadn't understood a word!

It seems to suggest that programmers will have to code for a specific number of parallel execution threads, which is probably going to be stuck at 2 for a long old time, if that.

Intel will probably still sell quad-core on marketing alone!
March 19, 2007 9:07:39 AM

What's missing here is: as apps go multithreaded, as stated above, 50% more/faster on a dual core could be read as possibly 100% for a quad. Right?
March 19, 2007 9:10:58 AM

If you read a little, Valve is more excited about quad than dual core.
March 19, 2007 9:18:32 AM

Quote:
What's missing here is: as apps go multithreaded, as stated above, 50% more/faster on a dual core could be read as possibly 100% for a quad. Right?


I don't understand. What's your point :?:
March 19, 2007 9:19:57 AM

I read; I've read till I'm red in the face. I was making a point. Anyone who's ever had a PC has used multithreaded apps, even on a single core, or multitasked. Duo is good for multitasking, but quad rulz for multithreading.
March 19, 2007 9:24:20 AM

No. 50% is pretty much an average maximum for good code. On a quad-core system, you'd run dual-proc code slower, as you're only utilising 2 cores instead of all 4 (bear in mind that the more cores you have, the lower they are likely to be clocked, hence the speed loss overall).

The big trap people keep falling into is assuming more cores = more power.

If the code is coarse-threaded, then the program doesn't scale well. With a ratio of one core to one thread, sure, you get speed, but it's not the best use of resources. Currently, most games tend to use coarse threading, if they use it at all. The Half-Life 2 engine uses a mixture of coarse and fine threading, which, while not as good as 100% fine, gives a better mix and allows the engine to dole out tasks to certain cores as the need arises, while keeping the cores busy with a semi-dedicated task as well.

Games suffer from the fact that all these threads need to be synched every frame. If you're running at 60fps, then every 1/60th of a second your threads need to know what's going on. In contrast, something like Folding@home doesn't need to synch threads, as each one takes its data and runs until finished, regardless of what the others are doing.
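To make that per-frame sync point concrete, here's a minimal fork/join sketch (my illustration in C++11; the subsystem names and empty bodies are placeholders, not from any engine): the main loop can't present a frame until its slowest task finishes.

```cpp
#include <future>

// Hypothetical per-frame subsystem work.
void update_ai()      { /* pathfinding, target selection... */ }
void update_physics() { /* collision, integration... */ }
void update_audio()   { /* mixing, 3D positioning... */ }

void run_frame() {
    // Fork: subsystems run in parallel on whatever cores are free.
    auto ai      = std::async(std::launch::async, update_ai);
    auto physics = std::async(std::launch::async, update_physics);
    auto audio   = std::async(std::launch::async, update_audio);

    // Join: the frame can't be rendered until the slowest task is
    // done, so every frame pays for its worst thread -- 60 times a
    // second at 60fps.
    ai.get();
    physics.get();
    audio.get();
    // render_frame();  // only now is the world state consistent
}

int main() {
    for (int frame = 0; frame < 60; ++frame)
        run_frame();
}
```

Folding@Home-style work skips the join entirely, which is why it scales so much more easily than a game loop.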

So, ultimately, unless the game is a) multithreaded and b) uses a good threading method, c) you won't see much improvement on a 4-core or 8-core system over a dual core.

Hell, even today's single-core systems are more than capable of running pretty much all the games out there. The main speed increase you see when running on a dual-proc machine is that the OS takes one core and the game uses the other. Less task switching = more speed.

And for the record, Windows XP is pretty damn good at making sure things like this happen. Don't forget your OS is also a vital part of the speed increase. If your OS can't allocate tasks to multiple cores efficiently, it doesn't matter how many you have; it just won't be very good. I imagine Linux can do the same fairly well?
March 19, 2007 9:38:11 AM

Good stuff, CableTwitch. Can you answer my questions at all? For fine threading, will programmers design for a specific number of cores, or will they leave it fairly open, or program ahead?

For instance, one could fine-thread a program into 16 threads that run in parallel (avoiding any race problems or deadlocks), and then a single-core system would run them all in sequence (or time-slice them, whatever), a dual-core would run 8 on each core, a quad-core 4 on each, etc.

Or instead will they just think: feck it, let's just design for 2 cores, and we'll worry about programming for 4 or 8 when we have to.

What I'm basically interested in is whether it is simply the move to parallel-execution multithreading that creates the programming problems, or whether it is always going to be a huge decision how many cores to optimise the program for.
March 19, 2007 9:41:37 AM

As far as I'm aware, fine threading can adapt to the number of cores, and dish out threads to cores as and when the need arises.
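As a minimal sketch of that adaptability (my illustration in C++11, not code from any engine): a worker pool sized from whatever core count the machine reports will run the same queued tasks 2-wide on a dual core and 4-wide on a quad, with no recompile.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPool {
public:
    // One worker per hardware core: 16 queued tasks run 16-wide on
    // 16 cores, 2-wide on a dual core, and in sequence on one core.
    ThreadPool() : done_(false) {
        unsigned n = std::thread::hardware_concurrency();
        if (n == 0) n = 2;  // the call may report 0 if unknown
        for (unsigned i = 0; i < n; ++i)
            workers_.emplace_back([this] { work(); });
    }

    ~ThreadPool() {
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }
        cv_.notify_all();
        for (auto& w : workers_) w.join();  // drains remaining tasks
    }

    void submit(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(m_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();
    }

private:
    void work() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return done_ || !tasks_.empty(); });
                if (done_ && tasks_.empty()) return;
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();  // run outside the lock so workers stay parallel
        }
    }

    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> tasks_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_;
};

int main() {
    ThreadPool pool;
    for (int i = 0; i < 16; ++i)
        pool.submit([i] { /* e.g. update entity i */ });
}
```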

As for races and deadlocks... that's down to bug squashing, not just making sure certain threads don't run with others. In fact, from what I can see, you're more likely to encounter them the more threads you run, so the problem becomes proportional to the number of cores you're running.

I could be wrong, so it's best to double-check this. But from what I've managed to make out, that's what I can see happening.

Also, whereabouts in the UK are you, out of curiosity? Reading-based loonatic here....
March 19, 2007 9:43:02 AM

Linux is great, but underused. Vista allows for better multithreading, and hopefully the next M$ product, due out in 2 years, will be better yet. One question I do have: is it better to write (in general) multithreaded or in 64-bit?
March 19, 2007 9:45:34 AM

I read this very same article, and a great article it was too. It explains very clearly the difficulties behind multithreading an application (through the example of a game). However, Mpilchfamily, jumping jack and MrMez have hit the nail on the head.

Yes, most games released in the past year have been ongoing projects, some of which will have had the game almost ready for a year already (not to mention the code behind the engine, which could be very old indeed) and just gone through level making, testing and tweaking. Think of the old Duke Nukem Forever analogy: if every project decided to redesign whenever a new tech came out, it would never ship.

And as MrMez mentioned, this is exactly the case with DX10 (last week 10.1 was announced at CeBIT, along with a little discussion of DX11). We will start seeing DX10 games coming along shortly: Crysis, UT3, a patch for FSX (I'm not sure why that is taking so long, as surely MS wants to get DX10 rolling as soon as possible; my best guess is really bad driver issues in actually implementing DX10) and also the patch for SupCom.

However, I don't quite agree with the statement that game development is behind hardware development; I don't see it as that simple. Firstly, how can you program for a multi-core environment if no multi-cores exist? Therefore game development will always be behind hardware. On the other hand, it took GPU hardware 8 months to catch up with Oblivion, and even people with QX6700s are still complaining of slowdowns in SupCom.

Now, as Jack correctly put it, things are beginning to filter through: Quake 4, Call of Duty 2, SupCom, Crysis, Alan Wake (later) and even the revised Source engine. So development is happening; it's just delayed by the reasons put forward.

My own take is that anyone (myself included) wanting to build now should wait until some of this development comes to fruition, thereby getting 20% extra performance at no extra cost. I'm using Crysis as the basis of my build, so I'm going to wait until that is out, then compare benchmarks over a variety of systems and games (and other programs) to come up with a proposed build. Whether it be AMD or Intel, AMD or Nvidia, quad core or dual core, I will weigh it up when the time comes. If by then DX10 still isn't getting decent results, I may have to re-evaluate, but at least I'll be able to make an informed decision. And whilst the 8800GTX performs DX9 brilliantly, it is still unclear how well it will perform in DX10. I don't want this to happen, but it is very possible that the 8800 will perform DX10 badly, leaving people with essentially great DX9 cards.

Anyway, I'm going off topic, but basically development can only really start once a tech is in place. Put it this way: you can't design a house before you have a look at the area you're building on.
March 19, 2007 9:46:52 AM

The next MS product in 2 years? And what kind of OS would that be?
March 19, 2007 9:49:18 AM

Some interesting results there, yes. Valve's approach is a good one, albeit not the most elegant, as they mention. However, programs like VRAD don't run a GUI and don't show anything other than text back to the user when running, so I'm not surprised to see a major speed increase there.

The in-game demo is also impressive, it has to be noted. Again, however, there may still be situations where the threading model will not provide the 100% speed increase you mention. You still have to wait for one thread to finish before other threads can use its output, for instance, and that's where the speed advantage is lost. So while yes, they have a fairly impressive gain over single and dual-core systems, it won't be a consistent one. Not until they move to a purely fine-threaded approach.
March 19, 2007 9:54:15 AM

Quote:
My own take is that anyone (myself included) wanting to build now should wait until some of this development comes to fruition, thereby getting 20% extra performance at no extra cost. I'm using Crysis as the basis of my build, so I'm going to wait until that is out, then compare benchmarks over a variety of systems and games (and other programs) to come up with a proposed build. Whether it be AMD or Intel, AMD or Nvidia, quad core or dual core, I will weigh it up when the time comes. If by then DX10 still isn't getting decent results, I may have to re-evaluate, but at least I'll be able to make an informed decision. And whilst the 8800GTX performs DX9 brilliantly, it is still unclear how well it will perform in DX10. I don't want this to happen, but it is very possible that the 8800 will perform DX10 badly, leaving people with essentially great DX9 cards.


I'm doing the same :)  Crysis is the game I'm going for, and Alan Wake, so I don't wanna buy now even though I have the money... I'm just waiting for the releases and the new tech.
March 19, 2007 9:54:42 AM

You are quite correct there, assuming the threading method you use is suited to it.

You can code for a dual-proc setup, but then you see the slowdown on a quad-core system. Same as how a single-core system can run games faster in some situations, because the cores themselves are clocked lower in a dual-core system than in a single core (3 GHz versus 2x 2.2 GHz, for instance).

As stated above, fine threading can accommodate any number of processors, while coarse threading means one whole core is dedicated, with no possibility of sharing its tasks with other cores to speed things up. An example would be physics processing... In coarse threading, one core does it all. In fine threading, you could send chunks off to however many cores you wanted to get the most out of the system.
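A minimal sketch of that chunked-physics idea (my illustration in C++11): split the object list into one slice per core and integrate the slices in parallel. Since the slices don't overlap, no locking is needed.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

struct Body { float pos; float vel; };

// Integrate one contiguous slice of the world. Slices don't overlap,
// so every worker runs lock-free.
void integrate(std::vector<Body>& bodies, std::size_t begin,
               std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i)
        bodies[i].pos += bodies[i].vel * dt;
}

void step_world(std::vector<Body>& bodies, float dt) {
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 2;  // the call may report 0 if unknown
    std::size_t chunk = (bodies.size() + cores - 1) / cores;

    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c) {
        std::size_t begin = c * chunk;
        std::size_t end = std::min(bodies.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back(integrate, std::ref(bodies), begin, end, dt);
    }
    for (auto& w : workers) w.join();  // the frame's sync point
}

int main() {
    std::vector<Body> world(10000, Body{0.0f, 1.0f});
    step_world(world, 1.0f / 60.0f);
}
```

The same source automatically uses 2 slices on a dual core and 8 on an 8-core machine, which is exactly why fine threading scales where coarse threading doesn't.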
March 19, 2007 9:57:27 AM

Actually, Linux is built to distribute load as efficiently as it can on up to 256 cores per machine; it is, however, usually built differently for single-core and multi-core systems.
There is a slight speed penalty when running an SMP kernel on a single-core system; however, the difference between a 16-core-capable and a 256-core-capable kernel is only RAM use (the maximum number of cores the kernel can handle is set at build time, and that number uses 4 KB * max cores).
Even on a non-multimedia system, load balancing is extremely nimble under Linux (I use the name for the kernel); you REALLY need to load all cores to the gills and then let the kernel escalate their priorities over that of the input/output to get sluggishness.
Personally, I have trouble saturating a single-core Sempron64 (7-8 medium to heavy apps running simultaneously), and my dual core has never reached sluggish levels of load (DVD backup, DVD burning, web surfing, instant messaging, office suite, web-radio playback, all simultaneous, and sometimes several instances of each).
The same machine under WinXP (even though I've stripped it of useless services, removed the Fisher-Price look and wizards, and use either the very same apps or very close equivalents) would become sluggish at half the load; on top of that, the CPU would get much hotter.

About threads: a very well made game would essentially dedicate a function to a thread, which would return results on query and then keep running (never entering a WAIT state), with one master thread being the game's main loop, in charge of monitoring the other threads: in effect replacing timings and resurrecting stalled threads. You could have several instances of a thread (say, an enemy's AI interacting with other AIs, the physics thread, the sound environment's thread, the graphics thread... when needed or prompted).
It is, of course, a different programming paradigm: instead of a game that runs one way, you have parts of a game each running on their own and interacting with other threads, either on self-querying or on external prompting. No more timing problems, and easier debugging: if a thread keeps hanging and needs frequent resurrecting, you know there's something wrong with it, yet the game doesn't crash.
The latter is actually taken from Minix, not Linux; the Minix kernel is much more modular than Linux and in that sense more robust and scalable. It does suffer from a non-negligible speed loss, yet it is more stable (which is saying something, considering Linux's stability).
Multi-core doesn't mean a linear speed increase; thread management alone will drag speed down. It does, however, allow for different, more nimble programming with better capabilities.
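Here's that master-thread/watchdog pattern as a minimal sketch (my illustration in C++11; a real engine would also have to recover the worker's state, not just notice the stall): each worker bumps a heartbeat counter, and the master loop notices when one stops.

```cpp
#include <atomic>
#include <chrono>
#include <cstdint>
#include <thread>

struct Worker {
    std::atomic<std::uint64_t> heartbeat{0};
    std::atomic<bool> stop{false};
    std::thread thread;
};

// Hypothetical subsystem body: does a slice of work, bumps the
// heartbeat, and never blocks waiting on anyone.
void ai_loop(Worker& w) {
    while (!w.stop) {
        // ...update one AI entity...
        ++w.heartbeat;
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}

int main() {
    Worker ai;
    ai.thread = std::thread(ai_loop, std::ref(ai));

    std::uint64_t last_seen = 0;
    for (int tick = 0; tick < 100; ++tick) {  // the "master" main loop
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        std::uint64_t now = ai.heartbeat;
        if (now == last_seen) {
            // Stalled: a real engine would tear the thread down and
            // resurrect it here, instead of the whole game crashing.
        }
        last_seen = now;
    }

    ai.stop = true;
    ai.thread.join();
}
```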
March 19, 2007 10:00:38 AM

We will see; I think it's gonna work out for 'em :)  It has to.

Yeah, I know it would not provide 100%, and it can't clock as high as a dual core, but 70% would do fine, and it would still be a nice jump anyway.

Maybe, but we will just have to wait and see.
March 19, 2007 10:04:19 AM

Interesting stuff. But not so surprising, considering Linux was based on a mainframe OS. 256 cores is quite impressive, though, and I wonder how long till we get to that point?

XP might be slower on the same system, but that's still quite relative. Sure, to most users it might seem sluggish, but then most home users either don't run such a heavy load or won't ever use Linux, so a comparison would be pointless. That's not to say I'm slagging Linux off, of course.

(My own experiences with Linux consist of trying to use a friend's machine without much success, and installing SuSE on my second machine for all of an hour, before forgetting the root password and reformatting and putting Win2K back on it :D  )
March 19, 2007 10:07:33 AM

Quote:
As far as I'm aware, fine threading can adapt to the number of cores, and dish out threads to cores as and when the need arises.

As for races and deadlocks... that's down to bug squashing, not just making sure certain threads don't run with others. In fact, from what I can see, you're more likely to encounter them the more threads you run, so the problem becomes proportional to the number of cores you're running.

I could be wrong, so it's best to double-check this. But from what I've managed to make out, that's what I can see happening.

Also, whereabouts in the UK are you, out of curiosity? Reading-based loonatic here....


Norwich. Thanks, btw.
March 19, 2007 10:12:14 AM

Multi-core and 64-bit (as I guess you're referring to those) are completely different:
- using 64-bit essentially means that your app is coded with carefully chosen data types with well-considered data ranges; at compile time, the compiler will make use of more efficient machine code to reduce the number of instructions (thus, cycles) needed to deliver the result.
- using multi-core means that you're threading your application so that a process can run in parallel with another (basic, usually optimized for synchronously clocked dual cores) or independently of others (advanced, most versatile).

Meaning that carefully programmed code can run as well as it can on 32-bit or 64-bit systems.
Yes, you can have single-core 64-bit programs and multi-core 32-bit ones, on top of single-core 32-bit apps and multi-core 64-bit ones.
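A tiny illustration of the 64-bit point (my addition, not from the post): the same C++ source compiles to better machine code on a 64-bit target, with no threading involved at all.

```cpp
#include <cstddef>
#include <cstdint>

// Summing 64-bit values: an x86-64 build does each addition with one
// 64-bit ADD, while a 32-bit x86 build needs an ADD/ADC pair per
// element plus extra register juggling. Same source, fewer
// instructions (and cycles) on the 64-bit target.
std::uint64_t sum(const std::uint64_t* data, std::size_t n) {
    std::uint64_t total = 0;
    for (std::size_t i = 0; i < n; ++i)
        total += data[i];
    return total;
}

int main() {
    std::uint64_t data[] = {1, 2, 3, 4};
    return static_cast<int>(sum(data, 4));  // exits with 10
}
```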
March 19, 2007 10:16:18 AM

My question was more: for gaming, will it be easier for software makers to code in 64-bit than in 32, and will 64-bit give results? And if so, what could we expect, given today's standards of hardware?
March 19, 2007 10:17:38 AM

It sounds a little early, I think... but Longhorn was the server OS.
March 19, 2007 10:23:14 AM

From the little I've read on it, it MAY be somewhat open-sourced, but I could be wrong heheh. M$ open-sourced.....
March 19, 2007 10:25:00 AM

Open source from Microsoft :p  Yeah, right haha :D  ... maybe they see Vista was a flop and decided to make another OS :roll:
March 19, 2007 10:32:31 AM

Actually, I think it's becoming a brutal reality for good ol' M$. I'm also wondering: just as W2K was to XP, will Vista be to Vienna?
March 19, 2007 10:44:07 AM

I have to agree with V8 on this one. Dual-core CPUs have been around for a couple of years now and are considered mainstream by today's standards. Game developers have had ample time and notice to get the ball rolling on the utilization of both cores of a modern-day CPU. I also believe that the lack of games that utilize both cores is a simple matter of money: game developers just want the games out as fast as possible. And although most games have very good graphics, only a small percentage use both CPU cores. I do think, though, that we will start seeing more titles shortly that utilize both cores. Let's see what happens over the next few months.

Dahak

AMD X2-4400+@2.6 S-939
EVGA NF4 SLI MB
2X EVGA 7950GT KO IN SLI
4X 512MB CRUCIAL BALLISTIX DDR500
WD300GIG HD
ACER 22IN WIDESCREEN LCD 1600X1200
THERMALTAKE TOUGHPOWER 850WATT PSU
COOLERMASTER MINI R120
3DMARK05 13,471
March 19, 2007 10:50:25 AM

Again, going back to previous answers, the game engines used by a lot of games released today are more than a few years old themselves. Do you...

A) Take the risk of altering the game engine to fit a multi-core system? (Maybe the license won't allow modifications like that, so that the engine coders can release another version, for even more money, that does support it.)

B) Program your own engine? (Time-consuming and costly, especially if others already have something competitive in development, and it means your next project will release a lot later due to the time spent on the engine instead.) Or...

C) Finish your current projects, and wait and see what comes along next? (Cheaper in the short run, but it can take some time for such an engine to appear, and it might cost a lot. Plus you have to spend time learning it, and THEN you can start making the games.)

The cycle of development in the games industry isn't as fast as some people might believe. I mean, just LOOK at Duke Nukem Forever... :wink:

But yeah, things are slowly starting to turn around, and development is getting more and more focused on the multi-core aspect. Just sit tight and have some patience. Or else go become a games programmer yourself; then you won't have any reason to wait for what you want :D 
March 19, 2007 10:56:31 AM

I want uber-multithreaded, 64-bit, NEW-idea games running on my (to-be) quad 6800/Barcelona, R600/8900GTX CF/SLI rig.
March 19, 2007 11:01:34 AM

Well, id Software's solution is to develop an engine, then release its source code; only the game's actual content is copyrighted (meaning you don't actually buy Doom 3 but a set of levels that comes along with supported binaries, and you can actually run it on another machine/OS).
This actually led to fast Linux/Mac porting and an easier transition to 64-bit and SMP; one of the virtues of Free/Open Source software is that, since others can see your code, you'd better make it look nice!
As such, code quality from id is usually passable to good upon release, and then it is very rapidly cleaned up and expanded; no wonder Doom 3's engine is now ported to any gaming rig you can think of and still running strong, even on 64-bit SMP systems.
Write one codebase, compile/run anywhere: the wonders of Free software...

The difficulty of 64-bit and/or SMP coding is this: it's much harder to make use of ugly hacks and still see the software run as intended. As such, code quality has to be much higher from the get-go, which makes programming take longer. However, it reduces support costs, because cleaner code usually entails less bug-fixing.
March 19, 2007 11:05:01 AM

Aye, I fully support this method. If it means that the engine is gradually advanced and checked over time by the people who use it, then that can only be a good thing.

OTOH, a lot of games run on the same engine as each other (the Q2 engine was widely used by a lot of games, Half-Life most notably), so it does go to show that, with tweaks and code hacks, things can still be as good as or better than the original.
March 19, 2007 11:15:34 AM

So getting it right from the get-go, then open-sourcing, is a must. I think that as hardware improvements and OS features (DX10) come along, the software side will HAVE to keep up and spend the money on developing superior games.
March 19, 2007 11:21:06 AM

One thing I've noticed in all these posts, and in other threads on this multi-core subject, is how new everyone thinks it is.

Neither multi-core nor multithreading is new.

25 years ago I was studying manufacturing process control and the digital management of said processes. We called this Real-Time Programming, and the only difference I see here is that instead of each independent process module running on a separate machine, they can now run on a separate core within one machine. Where global variables were communicated via a shared-access memory unit (think hard drive), they are now held in main memory. The essence and rigour of the design process to prevent clashes is the same now as it was 25 years ago; only the actual details of the hardware have changed.

In other words, the knowledge of how to do it effectively, efficiently and with full utilisation of all resources is OLD HAT.

No excuses for not getting it right 25 years later.

Q
March 19, 2007 11:32:08 AM

Quote:
Linux is great, but underused. Vista allows for better multithreading, and hopefully the next M$ product, due out in 2 years, will be better yet. One question I do have: is it better to write (in general) multithreaded or in 64-bit?

Why is Vista better for multithreading?????? (I know what Linux lacks in terms of POSIX threads, but what does Vista do better??)
And why oppose multi and 64? You can do both or neither... whichever suits you better. Besides, 64-bit is usually a compiler issue, not a programmer's.
March 19, 2007 11:39:17 AM

Quote:
One thing I've noticed in all these posts, and in other threads on this multi-core subject, is how new everyone thinks it is.

Neither multi-core nor multithreading is new.

25 years ago I was studying manufacturing process control and the digital management of said processes. We called this Real-Time Programming, and the only difference I see here is that instead of each independent process module running on a separate machine, they can now run on a separate core within one machine. Where global variables were communicated via a shared-access memory unit (think hard drive), they are now held in main memory. The essence and rigour of the design process to prevent clashes is the same now as it was 25 years ago; only the actual details of the hardware have changed.

In other words, the knowledge of how to do it effectively, efficiently and with full utilisation of all resources is OLD HAT.

No excuses for not getting it right 25 years later.

Q

While I agree with you that the basic principles are the same, multithreading is a bit more complicated. While a semaphore is enough to guarantee data integrity on a single-core/single-CPU machine, the fact that a core keeps data at the cache level makes the semaphore useless. Flushing the CPU cache is not possible on an x86 machine unless you use mutex and condition-variable functions. Of course this is not the end of the world, and one can easily adapt one's practices. But as I stated earlier, while multithreading is very much in use today (just check your task manager and ask it to display the number of threads in each process), graphics APIs evolve in a more performance-oriented fashion (which is why OpenGL, a low-level API, is still very much in use even after scene-graph APIs became widespread). For example, there are rendering engines that strictly forbid you to use multithreaded code ("the refresh functions must be called from the main thread only").
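A minimal sketch of that visibility point (my illustration in C++11, with one caveat: on real x86 hardware the issue is compiler/CPU reordering rather than literal cache flushing): proper synchronization, here an atomic with release/acquire semantics, is what guarantees one core sees another core's writes; a mutex gives the same guarantee.

```cpp
#include <atomic>
#include <thread>

int payload = 0;                 // plain shared data
std::atomic<bool> ready{false};  // the synchronization point

void producer() {
    payload = 42;                                   // (1) write the data
    ready.store(true, std::memory_order_release);   // (2) publish it
}

void consumer() {
    // The acquire load pairs with the release store: once this thread
    // sees ready == true, it is guaranteed to see payload == 42 too.
    // With a plain bool instead of an atomic, this would be a data
    // race, and the compiler/CPU would be free to break it.
    while (!ready.load(std::memory_order_acquire)) { /* spin */ }
    int value = payload;  // safe: always 42
    (void)value;
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}
```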
March 19, 2007 11:48:33 AM

It's out of context; what I meant was that Linux is an underused system, and Vista is better than XP.
March 19, 2007 11:51:17 AM

My mistake!