Advent of Multi-Core Gaming

wh3resmycar

Distinguished
I'm just pretty curious and excited. The quads have been out for over a year already, but we still aren't seeing games that fully utilize all these extra cores. I keep seeing people say that multi-core gaming is the future; I'm just wondering, when will that future come?

I recently bought a new PC and put my money on a Q6600, believing that when the time comes these quad-core baddies will pull away from their dual-core brethren in gaming performance (even though the duos can overclock 1.5GHz higher :D).

I remember Crysis was supposed to be multi-core capable, but as it turns out it isn't, which was a disappointment because we probably could've churned out an extra 10-20 fps if it had been.

Know of any up-and-coming games where we'll see multi-core utilization?
 

purplerat

Distinguished
Jul 18, 2006
1,519
0
19,810
I wouldn't be too concerned with "when will games use 4, 8, 16 cores". Currently the limitations are on the GPUs. Even if Crysis were fully using 4 cores, it still wouldn't have made much of a difference in FPS because current graphics cards just can't keep up.
 

infornography42

Distinguished
Mar 28, 2006
1,200
0
19,280
Until multiple cores are needed, programmers won't bother coding for them. It is significantly more complicated to program for multiple cores than for one.
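
To give a feel for why it's more complicated, here's a toy sketch (entirely my own example, nothing from any real engine; the names are made up, and it uses pthreads, so build with something like g++ -pthread). Two threads share one counter, and without the lock they silently lose updates:

#include <pthread.h>
#include <stdio.h>

long counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void* worker(void*) {
    for (int i = 0; i < 1000000; ++i) {
        pthread_mutex_lock(&lock);   // remove this line and the unlock below,
        counter++;                   // and the total usually comes up short,
        pthread_mutex_unlock(&lock); // because counter++ is not atomic
    }
    return 0;
}

int main() {
    pthread_t a, b;
    pthread_create(&a, 0, worker, 0);
    pthread_create(&b, 0, worker, 0);
    pthread_join(a, 0);
    pthread_join(b, 0);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}

Single-threaded code simply can't have that bug, which is a big part of why developers avoid threading until they have to.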
 

razor512

Distinguished
Jun 16, 2007
2,135
71
19,890
Even Crysis doesn't use multi-core properly; when running the game, it hovers around 55% usage on each of my 2 cores.

Second Life does this too, as do some other games.

The problem with the gaming industry seems to be that programmers either don't want to make their games more multithreaded or don't know how, and there's almost no way for the people who actually do know how to get a job at companies like EA.

It would be better if video hardware had an upgrade schedule similar to consoles'; no one likes upgrading every month or two to run the latest game (that gets extremely expensive very fast).

That's another reason console gaming is so popular: game developers can't impose special system requirements, the game just has to run on the given hardware. Because of this, console games will always run lag-free, and developers can't really compete with other companies on graphics because the console won't do much better without lagging.
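
Back to the CPU usage thing: you can see why those numbers come out around 50-55% with a little experiment (my own toy code, obviously nothing to do with how Crysis is actually structured). It does one core's worth of pure number crunching on a single thread; run it on a dual core and watch Task Manager, and the load often gets smeared across both cores as the scheduler migrates the one busy thread, even though only one core's worth of work exists:

#include <stdio.h>

int main() {
    volatile double x = 0.0;                     // volatile so the loop isn't optimized away
    for (long long i = 0; i < 4000000000LL; ++i) // one thread = one core's worth of work,
        x += i * 0.5;                            // let it run a while and watch the graphs
    printf("done, x = %f\n", x);
    return 0;
}

A game whose input, AI, physics, and render steps all run back to back on one thread behaves the same way, no matter how many cores you give it.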
 

purplerat

Distinguished
Jul 18, 2006
1,519
0
19,810
Even Crysis doesn't use multi-core properly; when running the game, it hovers around 55% usage on each of my 2 cores.

Doesn't the second part of your statement contradict the first? If the game is using each core equally, isn't that what you want? In this case 55% + 55% = 110%, i.e. more than a single core maxed out. Just because it's not running every core at 100% doesn't mean it's not working; it's just proof that the workload put on the GPU is much greater than the one put on the CPU.
 

razor512

Distinguished
Jun 16, 2007
2,135
71
19,890
I tried graphing it with RivaTuner, and no matter what I did I couldn't get Crysis to reach full CPU usage.

I have video converters that are single threaded but will use 50% of both cores. WinAVI, for example: their codec is extremely good, but they have not released a multithreaded version yet. While it converts at well over 300 fps using 50% of both cores, it could be even faster if it used 100% of both.

Maya has no problem using 2 cores, and it will even use more than 8 cores if needed (I've seen it used with networked dual quad-core (8-core) render slaves, and it scales across them well).

Photoshop will also make good use of multiple cores and will use 100% of them if needed, but for some reason Crysis doesn't want to use that much.

And it doesn't seem right that Crysis and Second Life can both have the same CPU usage.

Crysis has much better AI, much better physics, much better sound, much better everything when compared to Second Life.

The video card doesn't process AI, physics, and other elements like that for Crysis, while SL has only extremely basic physics and no AI.

PS: Crysis runs smoother than SL :)
 

SpinachEater

Distinguished
Oct 10, 2007
1,769
0
19,810
I don't have UT3 or a quad-core setup right now to check the usage, but if you look at the benchmarks below from X-bit Labs, the UT3 engine seems to like quad cores.

I think games taking advantage of the cores is on the horizon, but one problem is that 4-core utilization might get put on hold with all of this PC gaming drop-out. If we keep seeing console-to-PC games like Halo 2 and Far Cry 2, the push to move into quad-core technology will be delayed.

Good point above, though, too... GPUs are still a bottleneck. That is why I feel the old 65nm and newer 45nm quads have such a long life right now. The CPU technology is there and waiting, but the GPUs and game developers need to catch up.

http://www.xbitlabs.com/articles/cpu/display/core2quad-q9300_9.html#sect0
 

purplerat

Distinguished
Jul 18, 2006
1,519
0
19,810
Um, isn't Second Life an online game? Do you not realize that of all the potential bottlenecks in a PC, the network connection (especially to the internet) is probably the worst?
 

razor512

Distinguished
Jun 16, 2007
2,135
71
19,890



My pings are good, and the network doesn't have much to do with Crysis having much better graphics and a higher FPS than SL.

SL is pretty much hundreds of games in one; the only problem is that it could be considered the definition of poor programming.

There are generally 10-15 server problems a day on their end, and when they're not having server problems, they're having a "rolling restart" where they constantly restart their servers to kick all of the idlers off.

When that's not happening, everything is generally fine.

It just sucks when you're in the middle of a team battle and they restart the server without warning, or only warn you that the sim will restart in 30 seconds.
 

jujubefruit

Distinguished
Jun 15, 2008
27
0
18,530




I play Oblivion, and I just found out there's a way to tweak Oblivion to increase thread usage. Basically, in multiprocessing terms, a process represents the application, and a thread is a fork within that process to do a parallel task. Each application has one process, and each process has one or more threads (a process with "zero threads" is equivalent to a process with one thread). The atom of all processes and threads is the thread: if code is written as a single thread, that thread can never be split further to run in parallel, hence the term "atomic" thread. It is wrong to say and think "atomic process"; you must say and think "atomic thread", since a process can be multithreaded rather than exclusively single threaded. I don't know how multiprocessing is taught in universities today, but I just gave you a decade-old understanding of processes and threads as understood by a multithreaded or multiprocess programmer.
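
For the visual folks, the whole paragraph above fits in a few lines of code (a bare-bones toy example of my own, nothing from Oblivion; uses pthreads, so build with g++ -pthread or similar). One process starts life as a single main thread and forks off two more; those three threads are what the OS scheduler actually juggles across your cores, the process itself is just the container:

#include <pthread.h>
#include <stdio.h>

void* task(void* arg) {
    printf("thread %d doing its share of the work\n", *(int*)arg);
    return 0;
}

int main() {                              // one process...
    pthread_t t1, t2;
    int one = 1, two = 2;
    pthread_create(&t1, 0, task, &one);   // ...forks a second thread
    pthread_create(&t2, 0, task, &two);   // ...and a third
    pthread_join(t1, 0);                  // wait for both to finish
    pthread_join(t2, 0);
    printf("main thread: this process had 3 threads total\n");
    return 0;
}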

Anyway, if you want to test Oblivion and see if it is faster on a quad-core CPU, you need to download a user-made utility to tweak Oblivion. You can download it here: http://www.tesnexus.com/downloads/file.php?id=2780

Go through the UI that comes with it, and you will find options to increase thread usage: a specific checkbox that says "increase number of cpu threads", another checkbox that says "increase game threads usage", another that says "enable background path finding", and an unrelated one that says "increase pre-load buffer for >2gb machines".

Cool, huh? I am unable to test it since I added another 80+ mods (180+ total). I am currently scanning the readmes to determine load order. Painful times, but then I can fly the dragon again; simply amazing view from the sky...
Yes, you summon a dragon, mount it, and take off into the sky, all in 3D (flapping wings too). This time I hope to be able to throw a fireball at the monsters lurking below. This is all done with user-made mods in Oblivion.

I am humbled by this game. It is very hard for me to go back to the good poor old days when I could update my system every 18 months... Today I buy cheap upgrades every 24 months, yet this Oblivion game gives me the urge to splurge on a super system, premium-priced items that lose their value in six months.

jujubefruit





 

sportsfanboy

Distinguished
Crysis does a pretty good job with multiple cores... I know this because I use Core Temp to check the temps on the cores after playing in a hot room.
The cores warm up pretty evenly (Q6600), as opposed to some programs that are single- or two-thread applications.
 
Game development takes years, I mean years.

It's not as simple as saying "Okay, let's make all our games optimized for multi-core CPUs by noon tomorrow." Even now most games are not even optimized for dual cores. Yes, games can take advantage of the second core, but the coding hasn't been optimized yet.
 

jujubefruit

Distinguished
Jun 15, 2008
27
0
18,530
Optimized or not, if it can fork, it can already take advantage.

The ability to fork is a fundamental code capability that already exists in the current software release. How can you have an option to increase multithreading without fork? How can you write or include code to fork or create a thread in an existing software release without planning for it?

Symmetric multiprocessing is not new; it has existed since Unix was born. The idea of using or including multi-process code like fork cannot just suddenly appear in existing software or an existing game engine. Like you said, game development takes years; the option to enable more threads means an option to allow the current code to do more forking.

If such an option is available for your game, do you think the years have already been checked off?

Optimized code, optimized for what? What does "optimized for dual cores" mean? Are you saying you need separate code optimized for dual core, another optimized for quad core, and another optimized for single core?

If you answer yes to any of the above, then you need separate system software for each piece of optimized code. In layman's terms, you would need a Windows optimized for dual core to run dual-core-optimized code, and a Windows optimized for quad core to run quad-core-optimized code. SEE?

Basically, if a game gives you an option to increase threads, it means the OS can load-balance threads onto each CPU. The only way the OS can load-balance threads onto each CPU is if the application can fork more threads, and the only way the application can fork (create) more threads is if the user takes advantage of the existing software option to increase them. SEE?

A process or a thread is nothing more than a list of instructions for the processor. Each thread can execute and finish independently of the others. It doesn't matter whether the threads were forked from the same parent process or thread, or whether they come from different parents; all threads can execute and finish independently of each other. If two threads depend on each other, it is by the programmer's design. SEE?

So what does "optimized" mean to you? You are wrong to say that just any game can take advantage of a second CPU.

Only games that can fork can have more threads. If a game doesn't fork, its code is executed sequentially, even with all the compile-time link libraries, all the run-time dynamic link libraries, all the libraries of the universe, and all the badass super programmers and game designers. The game gets executed sequentially (single-core style). SEE?

More threads means more CPU time under a multitasking OS on a one-core processor. More threads means more CPU time, and threads that get load-balanced by the OS, on a multi-core processor. This is generic info; the specifics come when you read the MSDN documents for NT or Vista. For Unix, you have to read the programmer's manual from the vendor of your Unix flavor (best to just ask a vendor software engineer). For Linux, you have to read the source code, or an FAQ out there somewhere. In summary: if you want specifics, you need to know the specifics to begin your discovery.

Or be like me and try to understand the fundamental computing models. I still don't call myself a guru, but I try to pick up every computing model to exist on planet Earth. I've been doing it since playing games on Apple IIe computers.
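
If you want to watch the load balancing happen, try something like this (a toy example of my own, not from any game; pthreads again, g++ -pthread). It forks off 4 compute threads and never says which core anything should run on; on a quad core the OS spreads them out by itself, and Task Manager shows all 4 cores pegged:

#include <pthread.h>
#include <stdio.h>

void* crunch(void*) {
    volatile double x = 0.0;                     // volatile so the work isn't optimized away
    for (long long i = 0; i < 2000000000LL; ++i)
        x += i * 0.5;                            // pure CPU work, no I/O
    return 0;
}

int main() {
    pthread_t t[4];
    for (int i = 0; i < 4; ++i)
        pthread_create(&t[i], 0, crunch, 0);     // fork 4 threads, say nothing about cores
    for (int i = 0; i < 4; ++i)
        pthread_join(t[i], 0);
    printf("all 4 threads finished\n");
    return 0;
}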

jujubefruit
 


You are right, in almost every area. Unless you have tri-SLI or CFX going, the CPU is not the bottleneck. And in a way Intel is right that the GPU as we know it is "dead".

I don't think Intel was trying to say that the GPU altogether is dead, since Intel themselves are making one, Larrabee. I think what Intel means is that even with all the advancements, improvements, and extras (take ATI's next GPU, which has 480+ stream processing units), the performance is not what it should be. I think Intel is trying to say that we need a major change in the GPU arena, and that's what they aim to do with Larrabee.

If Intel is successful it may be a change for the better. I mean, if you could take the sheer speed of a CPU, strip everything else away, and make it run straight GPU-style in-order code and maybe physics, could you imagine the performance? If an 800MHz GPU core gives us 40-50 FPS in intense games, what would a 3GHz GPU core do?

As for CPUs being fully utilized, this will not happen anytime soon. Heck, a P4 3GHz w/HT is just now being utilized to its maximum. It will take 2 years before we see a Q6600 fully utilized, unless software companies finally decide to move at the pace of hardware technology.
 

imnotageek

Distinguished
Jan 4, 2008
150
0
18,680


Hi jujubefruit, I am not sure of the validity of what you wrote, but it does seem to make sense to me. Thanks for sharing. Your post kind of brings me back to my university days of parallel computing in the early 90s.
 

infornography42

Distinguished
Mar 28, 2006
1,200
0
19,280


Isn't this a bit like Yugo saying that BMW and Porsche are doing it wrong and they'll show 'em?

Intel isn't exactly known for making good GPUs. If they do make one that proves them right, more power to them, but I'd really be surprised.
 


But what Intel makes now are technically not true GPUs, as all they do is use the CPU to do the video processing. That's what a lot of people miss: Larrabee is not going to be like their IGPs. I myself am not saying they are doing it wrong, but I think there is something wrong, since the last few generations of GPUs have not been giving as much of an increase in performance as I would expect.

To say the least, if you want to compare cars, Porsche must be doing something wrong. For the price of a Porsche you can normally get a Z06 Vette that will smoke it easily and still have money left over. But I prefer American cars anyway.

I just hope that Intel's Larrabee does shake the market up. Right now nVidia is kind of hovering around the same design. I mean, they somehow were able to market the 9600GT, which was much the same as the 8800GT, and yet people bought it even though the performance increase was not worth it.
 

razor512

Distinguished
Jun 16, 2007
2,135
71
19,890
Optimizing for multi-core means getting the program to make the most of the cores.

Look at programs like Maya 8.5, a great 3D modeler. Older versions used multi-core, but in most cases would only use 50% of each core;

now, even for simple things, it uses 100% of both cores and gets the work done in half the time. For almost anything it does, it tries to fully use both cores to finish faster.

In Crysis, if you spawn lots and lots of enemies (which requires a lot more processing work to handle the AI), the game still hovers around 50% CPU usage.

Crysis just is not optimized for multi-core.
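
In miniature, the Maya-style behavior is just "split one big job in half and give each half its own thread." Here's a toy sketch of my own (made-up names, nothing to do with Maya's actual code; pthreads, g++ -pthread). On a dual core it finishes in roughly half the time, with both cores near 100% while it runs:

#include <pthread.h>
#include <stdio.h>

struct Range { long long lo, hi; double sum; };

void* partial_sum(void* arg) {
    Range* r = (Range*)arg;
    r->sum = 0.0;
    for (long long i = r->lo; i < r->hi; ++i)
        r->sum += i * 0.5;                 // each thread handles its own half
    return 0;
}

int main() {
    const long long N = 2000000000LL;
    Range a = { 0, N / 2, 0.0 };           // first half of the work
    Range b = { N / 2, N, 0.0 };           // second half
    pthread_t t1, t2;
    pthread_create(&t1, 0, partial_sum, &a);
    pthread_create(&t2, 0, partial_sum, &b);
    pthread_join(t1, 0);
    pthread_join(t2, 0);
    printf("total = %f\n", a.sum + b.sum);
    return 0;
}

The hard part in a game is that AI, physics, and rendering don't split into independent halves this neatly, which is exactly why it eats so much development time.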

There's also a reason for this: game developers are under crazy deadlines, and optimizing for multiple cores is one of the most difficult things to do. It takes too long.

With most professional programs, the developers will take their time and push back release dates as needed; they're in control of the release date, since it's better to have a fully optimized and stable program than to release a buggy one that will have many angry customers banging on your door with their lawyers.
 

radnor

Distinguished
Apr 9, 2008
1,021
0
19,290
I don't see multi-core CPUs as a revolution in gaming; a small step for now.
Hardware these days is way ahead of software. By that (easily contested) sentence I mean that software needs to be rewritten.

SMP and Hyper-Threading paved the way for multi-core, even though HT and SMP behave completely differently from multi-core. If they had never existed, on your quad you would just see one process/application per core, and most apps would definitely be single threaded. Real boring, no?

See it like this:

x64 has been here for a long time now, and only now are we starting to pinch it. And it is a breakthrough: the processor's registers are DOUBLE the width (with a far larger address space). If we don't see a performance leap, it's because the software isn't optimized, and mainstream software will take a few more years to adapt. To put it in perspective, we have had Windows XP 64 for some years now, silly Vista still has a 32-bit build, and Windows 7 will have a 32-bit build also!!!!!

And by god, x86 to x64 is a huge LEAP in computing power. And it's taking a hell of a long time..........zzzzzzzzz
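
You can literally see the doubling in one line of code (a trivial check of my own, nothing vendor-specific). Build it as 32-bit and then as 64-bit:

#include <stdio.h>

int main() {
    // prints 4 on a 32-bit build, 8 on a 64-bit build:
    // x64 code gets twice-as-wide pointers/registers and a vastly bigger address space
    printf("pointer size: %u bytes\n", (unsigned)sizeof(void*));
    return 0;
}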

A bit of history on the first x86-64 CPU.

AMD64 was created as an alternative to Intel and Hewlett Packard's radically different IA-64 architecture. Originally announced as "x86-64" in August 2000, the architecture was positioned by AMD from the beginning as an evolutionary way to add 64-bit computing capabilities to the existing x86 architecture, as opposed to Intel's approach of creating an entirely new 64-bit architecture with IA-64.

The Athlon 64 was originally codenamed ClawHammer by AMD, and was referred to as such internally and in press releases. The first Athlon 64 FX was based on the first Opteron core, SledgeHammer. Both cores, produced on a 130 nanometer process, were first introduced on September 23, 2003.

End of history.

September 23, 2003: almost 5 years ago. Apps taking any "real" advantage of many-core or multi-core will take a sh*tload of time. Hope I helped and didn't make ya depressed.

 

infornography42

Distinguished
Mar 28, 2006
1,200
0
19,280
Well, what is holding back 64-bit is Microsoft. Blame rests squarely with them.

First up, they should have made XP-64 more functional; its utter lack of driver and software support was... problematic. Then Vista came out, and not only did it still offer a 32-bit option, which they could easily have done away with completely, but it also sucked so badly that people were downgrading back to XP.

Until a generation after the first widespread and successful 64-bit OS, we will not see developers making 64-bit software, simply because they can code for 32-bit and have it run on both, just not optimized. Until the haves outnumber the have-nots (speaking of 64-bit OSes), developers won't see an exclusive product as a worthwhile effort.
 


Driver support is not solely MS's job; it's also up to the companies that make the hardware to code for it.

Same goes for Vista. A lot of the issues in the beginning were driver-related, due to companies not really optimizing their drivers for Vista. Luckily most of those issues have been resolved.

To say the least, there is no true use for x64 except when you need to address more than 4GB of memory. Although x64 does offer a better system and probably would make better use of the CPU's power, it's not so easy to just drop 32-bit support. Intel and HP tried with IA-64 but were not successful.

It's like 16-bit. There is still support for 16-bit and even 8-bit on modern CPUs. When 32-bit was introduced, it had to be compatible with 16-bit and 8-bit to give people time to change to full 32-bit. We are now fully 32-bit, but the thing is that people still use some older 16-bit and 8-bit software, even on servers.

Truth be told, the reason we don't progress at a much faster rate is us. When we can all just drop our old machines and buy new ones, that's when we will go to the next level without any problems.

As for multi-core, it's a bit different. Yes, it is harder to code and optimize, but that shouldn't stop companies from trying to get there. What would you buy first: a 3D rendering program that fully utilizes all 4 cores of a quad-core, or one that just utilizes 2 cores on a 4-core CPU? Most will go for the former, especially in 3D rendering, since it saves time, and time is money.

So for the advancement of multi-core gaming/programs, it's the software companies who are holding us back. Of course there are game companies trying to push it, such as Valve and Crytek, but it's not easy, since they are also trying to make their games run on much older systems. Well, at least Valve is, as most of their new games can still run on DX7-compliant hardware and single-core CPUs. I can't say the same about Crytek, as Crysis will not run on DX7 hardware and will probably choke on an older single-core CPU.
 
I think adding better multi-GPU scaling within games, solving the AFR micro-stutter, and even the possibility of multi-core communication between GPUs is by far a better approach to seeing real improvements than hoping for a CPU that doesn't really add much to any game as far as rasterization goes; it all still has to be done on a GPU. If ray tracing ever becomes a decent possibility, it will only have a few advantages in the overall processing versus rasterization; the majority will still rely on rasterization. Ray tracing will help, and blending both, if it can be done, would give the best performance, but multi-core CPUs won't add as much as people hope. Only through ray tracing, and as I've said, that's limited in what it can do.