Disappointing dual cores

P4Man

Distinguished
Feb 6, 2004
2,305
0
19,780
If there is any truth to <A HREF="http://www.tomshardware.com/hardnews/20050411_132505.html" target="_new"> this article </A>, I'd say the future for Intel CPUs looks bleak. For the next 12 months, they will be stuck at 3.2 GHz for Smithfield, 3.4 for the EE, and Yonah will not increase clock speeds beyond what current Dothans offer (i.e. 2.16 GHz) either.

On the ultra high end (desktop), Pentium D EE 3.4 should save intel some face in benchmarketing, but if AMD does indeed pull a 2.4 GHz dual core A64 out of its hat, even the EE will be beaten silly on just about every app out there. Not that anyone really cares about either chip though.

On the high end, Intel is going to be beaten silly on gaming benchmarks and other single threaded code, while most likely making a very good show on multithreaded, multimedia benchies. Pretty much as it has been for a while, only with larger differences in both directions. If/when AMD does release lower clocked DC chips (1.8-2.2 GHz), which I doubt will happen until 65nm, Intel will most likely be trounced in just about every benchmark here again.

In the mobile market, Dothan currently reigns supreme, but most likely will not improve much until Yonah shows up. How Turion pans out on 90nm is anyone's guess; mine is that it will be pretty close performance- and TDP-wise, but have less real-world battery life. OTOH, it will be 64 bit enabled, not a bad trump card once 64 bit Windows is upon us, and easier to market than improved battery life.

Once Yonah ships next year, Intel will have a very marketable advantage (dual core), but by that time this chip will be a pretty poor single threaded performer (still being at <=2.16 GHz), no standout running multithreaded code either, 32 bit only, and with a not so terribly low TDP, hardly an option for thin and lights. Not my idea of an uberchip.

On the server front, things look worst of all; with no dual core Xeons scheduled for this year, Intel is going to hurt seriously. Opterons already pretty much outperform Xeons in most configurations; only IBM's ultra expensive X3 chipset saves them some face in the 4 way market. Dual core will add an enormous advantage on top of this, either performance wise per socket or price wise, offering 4 core Opteron systems for only a fraction (<1/3!) of the price of otherwise comparable 4 core Xeon systems. Not to mention single core Opterons will gain a huge advantage: upgradeability. Don't underestimate that; many 4 way servers are purchased with just 2 CPUs, only to be able to add 2 more later on. Considering 4 way "ready" systems easily cost 3x as much as comparable 2 way only systems, upgradeability is worth quite a few bucks here.

I don't know if there are any intel investors on this board, but I would consider getting rid of their stock right now.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 

P4Man

Distinguished
Feb 6, 2004
2,305
0
19,780
>For the same price dual old CPUs beats the same priced faster
> CPU hands down.

Depends.. some (Photoshop) operations benefit greatly from dual cores (CPUs), almost up to 100%, some only a bit, and still some, none at all. In short, I disagree with your statement. I'd hazard a guess a dual Xeon ~2.4 GHz (or Athlon MP 2400+) will be beaten on >90% of the PS operations by a P4 3.6 / A64 3700+, and I'd guess prices won't be that far apart.

Now if you'd have used rendering as an example, I would have agreed.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 

P4Man

Distinguished
Feb 6, 2004
2,305
0
19,780
also..:

>That's good news actually. It means the CPU market has
>finally stabilised and users can save more cash in their
>pocket by avoiding costy upgrades.

It would also mean current high end CPUs will not drop much in price, as they would still be high end 12 months from now. So that's not good news for cash starved consumers either, unless they already spent their last change on a new system :)

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 

P4Man

Distinguished
Feb 6, 2004
2,305
0
19,780
>Hmmm... I should've been more clear about that statement.

Oh but you were very clear, you used Photoshop as an example. It's a good example too, as it will show anything from 0% to nearly 100% speed increase depending on what operation or filter you use. Unfortunately it is rarely close to 100% for the things I use it for, and in general it's closer to 20-30%. But if you use the Gaussian blur filter all day long, money for a "slow" dual setup would be well spent.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 

Barley_Mcflexo

Distinguished
Mar 20, 2005
19
0
18,510
wusy, please humor me for a moment. Why would one not game on dual core chips? Will games never take advantage of the extra core? I'm not being a smart-ass, I'd just like you to expand on your statement. Thanks!
 

slvr_phoenix

Splendid
Dec 31, 2007
6,223
1
25,780
Quake 3 and Doom 3 do take advantage now. More will in the future. (Especially if dualcore CPUs catch on, which they will. Dell, HP, and all will tout them as great and people will believe them.) The market will shift enough to make multithreaded software worth the enormous effort and increase in cost to produce and maintain.

But to better answer your question, software that is single-threaded is much easier (and therefore cheaper) to write and debug, so it has been the standard, because unless you have two (or more) CPUs, single-threaded software will run best on your machine anyway. It was a win-win situation since the vast majority of the market was single-CPUed.

But now that the market is shifting to multiple CPU boxes (and especially now with dual core CPUs), writing the much more laborious and costly multi-threaded software that can run on multiple CPUs at the same time (whereas single-threaded software can't) will become more and more common, because more and more people will be able to benefit from it on their dual core computers.

So for the most part gamers don't benefit from dual core now because most games can only use one core, but they will benefit in the future if dual core catches on.
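The split slvr_phoenix describes can be sketched in a few lines. This is purely illustrative (Python, with made-up stand-in workloads; note CPython's GIL serializes pure-Python bytecode, so treat it as a structural picture rather than a real dual core speedup):

```python
import threading

results = {}

def render_frame():
    # stand-in for one CPU-bound job (e.g. rendering)
    results["frame"] = sum(range(100_000))

def update_ai():
    # stand-in for a second, independent job (e.g. AI)
    results["ai"] = sum(range(50_000))

# Single-threaded style: the jobs run back to back on one core.
render_frame()
update_ai()

# Multi-threaded style: each job gets its own thread, so on a
# dual core machine the OS can schedule them simultaneously.
t1 = threading.Thread(target=render_frame)
t2 = threading.Thread(target=update_ai)
t1.start(); t2.start()
t1.join(); t2.join()
```

The extra effort he mentions lives in everything this sketch omits: the two jobs here never touch the same data, which is precisely what real game code cannot guarantee.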

<pre> :eek: <font color=purple>I express to you a hex value 84 with my ten binary 'digits'. :eek: </font color=purple></pre><p>@ 185K -> 200,000 miles or bust!
 

Clob

Distinguished
Sep 7, 2003
1,317
0
19,280
How many people can encode a DVD and play a game at the same time on a single core A64? This is the concept that compels me to go the dual core route. That, and not needing a dual socket motherboard.

"If youre paddling upstream in a canoe and a
wheel falls off, how many pancakes fit in a doghouse? None! Icecream doesn't have bones!!!"

"Battling Gimps and Dimbulbs HERE at THGC"
 

P4Man

Distinguished
Feb 6, 2004
2,305
0
19,780
>How many people can encode a DVD and play a game at the same
>time on a single core A64 ?

I can do that on my old AXP.

And you could do it on a Celeron/Duron; all you need to do (if the encoding app doesn't do it for you) is reduce the encoder's thread priority in the task manager. Then, provided you have enough RAM, you will practically not notice the encoding in the background while you play.

That said, the encoding will not go very fast if another cpu intensive app (like the game in this example) consumes most CPU cycles. A dual core chip would not provide any significant performance boost to the game, but it would seriously speed up the background process. Personally, I don't care much how long it takes as I don't have to wait for it to finish anyhow, but YMMV.
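The priority trick can also be done from code rather than the task manager. A rough sketch (Python on Unix; the encoder command line is a placeholder, and on Windows you'd pass a `creationflags` priority class instead of `preexec_fn`):

```python
import os
import subprocess
import sys

# Placeholder for the real DVD encoder command line.
encode_cmd = [sys.executable, "-c", "print('encode finished')"]

# Start the encode at the lowest CPU priority (nice 19): the
# foreground game keeps nearly all the CPU time, and the encode
# soaks up whatever cycles are left over.
proc = subprocess.Popen(
    encode_cmd,
    stdout=subprocess.PIPE,
    preexec_fn=lambda: os.nice(19),  # Unix-only hook
)
out, _ = proc.communicate()
print(out.decode().strip())
```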

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 

P4Man

Distinguished
Feb 6, 2004
2,305
0
19,780
>Why would one not game on dual core chips?

I assume he meant to say people should not buy duallies if it is just to play games, as they won't provide any tangible benefit for games in the next few years. In fact, for the money they will most likely perform (considerably) worse (depending on what price point you are buying at), and in absolute terms, if price is not an issue, due to thermal limitations SC variants will simply be faster (thanks to higher clocks) than DC chips. That might or might not change with 65nm, but it's definitely going to be true for Pentium D and dual core A64s. Therefore, if gaming is your main concern, buying dual core makes no sense for now.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 

P4Man

Distinguished
Feb 6, 2004
2,305
0
19,780
>I personally am not a programmer hence I wouldn't have the
>slightest clue how much more work is needed to make a game
>use multi-threads.

I'm just an old school hobby programmer, but I assure you it is indeed hard. Depending on the type of program, extremely hard. Worse, it's not just difficult; sometimes it's simply impossible to extract better performance by splitting up your code into independent threads, since the required overhead to synch between the threads can be orders of magnitude greater than the potential performance increase. No amount of hard work is going to help there.
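A tiny illustration (Python) of that overhead problem: when the unit of work is as small as a single increment, taking a lock for every step costs far more than the extra threads could ever give back. The answer still comes out right, it just comes out slower than a plain single-threaded loop:

```python
import threading

counter = 0
lock = threading.Lock()

def work(n):
    global counter
    for _ in range(n):
        # Every increment must take the lock: the synchronisation
        # cost here dwarfs the one-instruction "work" being shared.
        with lock:
            counter += 1

threads = [threading.Thread(target=work, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 -- correct, but slower than one thread doing it all
```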

Now I'm willing to believe game engines do not fall into this category, as I can see some potential for splitting up AI, 3D and physics, but the potential is clearly not unlimited. If one day, for instance, AI were the dominant workload on the CPU, while 3D and physics were entirely offloaded to dedicated hardware, I can imagine scenarios or algorithms where the potential might eventually drop to close to zero again.

An often used analogy is creating a program with a team of programmers. You can add as many as you want, but you'll get diminishing returns with each added developer, up to the point where you get a negative return and the development will actually take longer than if you had stuck with a smaller team. Obviously, creating a huge app like an OS benefits from a much larger team than, for instance, writing an audio codec, but in both cases the talent/speed of the individual developer can never be offset by just adding more people to your team.

If you apply this to multicore, looking into my crystal ball, I'm sure 2 cores will eventually be put to good use on the desktop, 4 cores maybe.. but for most desktop apps, other than highly parallelizable workloads (like rendering or encoding), anything more will simply not help IMHO, regardless of how hard developers try.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 

endyen

Splendid
We seem to be forgetting about 64 bit for games. That is another route that will require a great deal of extra coding, but with more tangible results.
Given the choice of coding for 64 bit or coding for dual core, most would choose 64 bit.
Yes, the two are mutually exclusive. There is no practical way to debug for both at the same time.
 

P4Man

Distinguished
Feb 6, 2004
2,305
0
19,780
>We seem to be forgetting about 64 bit for games

No, not really.. it's just unrelated.

>That is another route that will require a great deal of extra
> coding, but with more tangable results.

Just the opposite, really: it hardly requires any coding effort at all. Basically, it's a recompile, which is orders of magnitude easier and less work than optimizing for multicore, which requires you to rework the source from the ground up. Basically, toss it away and start all over. And not only do you have to redo those parts, you have to do them in a significantly more difficult way.

OTOH, the potential performance benefits of going 64 bit are IMHO quite a bit less than what you could gain from properly supporting multicore. As I see it, 64 bit will give a small to reasonable performance boost, but will mostly ensure games can keep advancing the way they have these last 5 years. MT has at least the potential to double (maybe one day quadruple) CPU bound performance.

>Yes, the two are mutually exclusive. There is no practical
>way to debug for bothe at the same time.

Huh? What makes you think that? Of course they are not mutually exclusive; multithreaded 64 bit code is being developed every day, and has been for decades. Just not for the desktop.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 

endyen

Splendid
First off, a "simple recompile" will give none of the advances you speak of. To get those requires a rework. True, that is less troublesome than coding for MT, but it is a lot of time.
As far as the MP code that is being moved to 64 bit goes, that is a different story. Most of the bugs have been worked out, so it does only require a simple recompile. Any resulting bugs would be attributable to that.
As far as gains go, a 30 to 40% gain from 64 bit is anticipated, with about a 20% increase in coding.
Games could have taken advantage of multi-threading, had there been the type of gains you seem to think are there. The very nature of gaming suggests that parallelism is not that useful. Certainly nobody doubts that the degree of difficulty for coding games for MP is exponential.
Perhaps you are right though. We shall just have to wait and see.
 

P4Man

Distinguished
Feb 6, 2004
2,305
0
19,780
Endyen, you confuse me here. Normally you seem like a pretty intelligent guy, but today it seems you're not.. well.. quite yourself.

>First off, a "simple recompile" will give none of the
>advances you speak of. To get those requires a rework.

If the code is 64 bit safe, which any code produced in the last half decade by any half competent programmer really should be, it really only is a recompile. Okay, in reality you'll probably have to review parts of the code, change or add some variable declarations, perhaps modify a boundary check here and there if you messed up or cut corners in the initial code, maybe rework some critical loops you handcoded in assembler.. but a monkey could do all that. I'm sure you remember IBM's claim that it ported DB2 to AMD64 in only a couple of days? Just *compiling* such a huge app would take longer on any of our machines. Heck, compiling OpenOffice takes nearly half a day on my machine.

>As far as the MP code that is being moved to 64 bit, that is
> a different story. Most of the bugs have been worked out,
>so it does only require a simple recompile

Huh? What on earth makes you think MP code would be de facto 64 bit safe, and SP code would somehow not? Again, there is no relation between SP/MP and 32 bit/64 bit code whatsoever. None, nada. Unless you meant to say SMP code is likely "newer" and therefore has a better chance of being 64 bit safe, but that is hardly an argument in this context.

>As far as gains go, a 30 to 40% gain from 64bit is
>anticipated,

Whoa.. that seems like a *very* high estimate to me, and I have a reputation for being one of the most vocal advocates of 64 bit computing here. No way that number is going to be anywhere near representative, except in those cases where the 2GB limit is already being hit and coders have to resort to PAE hacks. There you could see that sort of speedup, but on general code, or games? No way. Divide that number by a factor of 2 and I'll be fairly impressed already. 64 bit is not nearly as much about speed as it is about the ability to do things you can't do with 32 bit.

>with about a 20% increase in coding.

What do you mean, "increase in coding"? The source code remains the same, and the amount of work to produce this code remains the same; only, your code obviously needs to be 64 bit safe. The only thing that increases slightly is compiled code size. IOW, code density decreases slightly, a minor potential performance decreasing factor. But from what I've seen, the (compiled) code bloat is around 5-10%, and the resulting performance drop (since you effectively reduce memory bandwidth) almost unmeasurable.

>Games could have taken advantage of muti-threading, had
>there been the type of gains you seem to think are there.

I said the <b>potential</b> is there. Leveraging it is an <b>entirely</b> different story altogether, but the best theoretical scenario indeed is that CPU bound gaming code would gain nearly 100% speedup from supporting dual core.

Best case real world (CPU bound) speedups will probably be closer to 30-40% I guess, giving an even smaller effective speedup of your FPS, since there are other bottlenecks. So it will be interesting to see to what degree game developers will go through the trouble, and have the ability, to tap this potential.

Just to put this in context: handcoding everything in assembler arguably has potential for far greater speedups, but some critical loops aside, no one does this because it's way too hard and not economically viable. Creating MT code for games sits somewhere in between, I'd say; where exactly, we'll have to see.

>The very nature of gaming suggests that paralelism is not
>that usefull.

As has been said so often, splitting AI, 3D and physics seems like a rather natural thing to do. Now I have no idea of the respective CPU load these components produce, and if one of them needs like 90% of the CPU time, it's going to be really hard to extract any benefit from MT. If these components (future threads) all require similar amounts of CPU time, and not too much synchronisation overhead, we could indeed see some fairly spectacular speedups one day.

>Certanly nobody doubts that the degree of difficulty for
>coding games to MP is exponential.

Agreed, though again, it's not only a matter of being "hard". In some cases it is simply impossible to extract more performance. Anyway, like I said, I don't think this applies to game engines as a whole, but for all I know, it might apply to AI or physics or 3D.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 

FUGGER

Distinguished
Dec 31, 2007
2,490
0
19,780
As clueless as ever.

You just love to spew garbage to your built-in lemming audience. The sad thing is they believe some of the crap you spew.

Let's hear about those x86-64 apps and how they make your job easier and all the cool stuff you can do now.

Arbitration on the dual core products will give us lower latency and steps closer to VT technology.

FX-57?? try Q3 2005

If you want to see Toledo, check my site; benches were recently posted.

<A HREF="http://www.xtremesystems.org" target="_new">www.xtremesystems.org</A>
 

P4Man

Distinguished
Feb 6, 2004
2,305
0
19,780
>As clueless as ever.

I've been accused of many things in my life, but not often of being clueless, or at least not by anyone actually having a clue themselves. Why don't you share your insights and point out my cluelessness?

>The sad thing is they believe some of the crap you spew.

True, though that's probably because most of my crap tends to be correct.

>Lets hear about those x86-64 apps and how they make your job
> easier and all the cool stuff you can do now.

Ah.. but I thought I was the clueless one, so why should I clue you in? Why don't you explain to me why it's useless, if that is what you think? Enlighten us, oh Clued One.

>FX-57?? try Q3 2005

Did I ever say anything other than 2H05, perhaps?

= The views stated herein are my personal views, and not necessarily the views of my wife. =
 

sonoran

Distinguished
Jun 21, 2002
315
0
18,790
>First off, a "simple recompile" will give none of the advances you speak of.
A simple recompile would provide the ability to utilize the additional program registers, which we've all seen can be a significant performance booster in itself.

>The very nature of gaming suggests that parallelism is not that useful.
I'm betting you're wrong on this one. First, DX9 is already multithreaded to some extent, so any game that utilizes it already uses some multithreaded code. Second, the code libraries being used by game coders are already being enhanced to utilize multithreading. And imagine, if you will, a game that has a shared memory space (perhaps an object running in its own thread) describing the "world" of the game. Now imagine creating each creature in that world as a completely separate object, each running in its own thread of execution, that interacts with that shared world. Every single animate object in the game could be a separate code thread. Now you're talking serious multithreading...
 

Mephistopheles

Distinguished
Feb 10, 2003
2,444
0
19,780
>Now imagine creating each creature in that world as a completely separate object, each running in its own thread of execution, that interacts with that shared world. Every single animate object in the game could be a separate code thread. Now you're talking serious multithreading...
Unfortunately, while this <i>is</i> serious multithreading, this would also be <i>extremely</i> thread-<b>un</b>safe, unless the programmers are geniuses. I mean, each global variable can be altered only by a single thread in classical safe code. If more than one thread can change a single global variable, you've got extremely nasty code. Thread interaction is very complicated business...

Mind you, it could be done, but it requires tremendous effort... at least from what I know. Multithreading is difficult.
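For what it's worth, the classical way to keep such a shared world safe is to funnel every mutation through one lock, which is also exactly why the gains are limited: the lock serializes the threads again. A toy sketch (Python, with hypothetical creature names):

```python
import threading

world = {"goblin": 0, "orc": 0}   # shared game state
world_lock = threading.Lock()     # guards every mutation of it

def creature(name, steps):
    for _ in range(steps):
        with world_lock:          # only one thread may touch the
            world[name] += 1      # world at any moment

threads = [threading.Thread(target=creature, args=(name, 1_000))
           for name in world]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(world)
```

With one big lock this is safe but barely parallel; splitting the world into finer-grained locks buys back parallelism at the cost of exactly the nasty thread interactions described above.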

Anyone else have more expertise on how to multithread?
 

ChipDeath

Splendid
May 16, 2002
4,307
0
22,790
I'm not great at multi-threading, but have done a very small amount in this area..

AFAIK, you need to completely separate the threads... For example, having a separate thread for each monster sounds great until you get to a situation that requires two monsters to interact (they want to hit each other, they both want to open the same door from different sides, etc.). Then, even if you can somehow cope with that, you'd likely be looking at a slight <i>reduction</i> in speed over letting just one thread handle it all, as the two threads would need to 'talk' to each other until the 'shared' bit is resolved. This just isn't going to happen :eek: ...
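One standard answer to the "two monsters grab each other" case is to always acquire the two locks in a fixed global order (say, by object id), so the threads can never end up waiting on each other's lock forever. A sketch with made-up monster objects (Python):

```python
import threading
from itertools import count

class Monster:
    _ids = count()
    def __init__(self, hp):
        self.id = next(Monster._ids)
        self.hp = hp
        self.lock = threading.Lock()

def attack(attacker, target, dmg):
    # Lock both monsters, lowest id first: two threads attacking
    # each other then always contend in the same order, so neither
    # can hold one lock while waiting forever on the other.
    first, second = sorted((attacker, target), key=lambda m: m.id)
    with first.lock, second.lock:
        target.hp -= dmg

a, b = Monster(hp=100), Monster(hp=100)
t1 = threading.Thread(target=attack, args=(a, b, 10))
t2 = threading.Thread(target=attack, args=(b, a, 25))
t1.start(); t2.start()
t1.join(); t2.join()
print(a.hp, b.hp)  # 75 90
```

It avoids the deadlock, but not the cost pointed out above: the two threads still serialize on the shared pair of locks.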

A better situation would be you going through a level whilst a computer-controlled version of you goes through a completely separate copy of the same level, as some form of race or 'par' time or something. Something like a ghost car in a racing game, for instance: what you do can't affect it, and what it does can't affect you...

A far more likely real-world situation is where one thread looks after background apps (software firewall and the like), while another just runs the game (or whatever).

So it could have applications, but they are always going to be more limited than people realise.

---
Winnie 3200+ @ ~2.5Ghz, ~1.41V
1Gb @ 209Mhz, 2T, 3-5-5-10
Asus 6800GT 128Mb
 

P4Man

Distinguished
Feb 6, 2004
2,305
0
19,780
Each "monster" spawning its own thread is indeed a unlikely example.. just what would that thread do ? The 3D rendering is not limited to just one monster, makes no sense. Physics.. possibly, but the monster is going to interact with other objects or creatures as you pointed out, synching between them, the overhead could be bigger than the benefit. AI however, should be quite doable.. each monster 'thinks' for itself, in its own thread. There would very little, if any thread synchronisation problems. But just having AI "globally" in a seperate thread would be better and simpler, since its doubtfull spawning as many threads as you have AI creatures will increase performance, as you will be duplicating far too many things..

Btw, just imagine playing Doom 7 and having to upgrade your eight core machine to a 16 core one to be able to play on "hard" settings with more than 6 simultaneous enemies :)


= The views stated herein are my personal views, and not necessarily the views of my wife. =
 

Mephistopheles

Distinguished
Feb 10, 2003
2,444
0
19,780
>So it could have applications, but they are always going to be more limited than people realise.
Absolutely. That's what I meant. The adequate resolution of shared information amongst threads is of great importance for this to be of any benefit. Thread interaction needs to be worked on.

That said, I do think that the wide availability of multithreaded code and multicore CPUs might change the overall programming paradigm by a lot. In games, and in 99.99% of current software, no one ever bothered about threads, but now we might just be seeing a whole lot more programming tools, libraries, tricks and so on regarding multiple threads. A whole plethora of tricks comes to mind, not just AI-related; we can rest assured that they will all be attempted at some point or other in the next several years. Programs will become smarter, in a sense.

There has not been a single change in the programming world so powerful as this one in a very long time!

You get the picture, though.

Maybe you'll need the capability of collapsing threads onto a main game world from time to time, much like in quantum mechanics, for instance. Each element would advance in time and be processed by the CPU completely in parallel, and player interaction would be a sequence of collapsing threads and then instantaneously and seamlessly starting to process all over again with the collapsed data. It would be a smart approach, mind you... And it would be massively parallelizable...

Who knows... In the next years, there will be an unprecedented number of brains and think tanks turned to the question of thread interaction in multithreaded code. Maybe gains will be limited, but things might turn out just a little better than expected.

I for one would be rather disappointed if that weren't the case...

(OMG, P4Man was right, I <i>am</i> an eternal optimist. Maybe I actually needed that title? I'm not an "eternal poster", I'm an eternal optimist... sounds better... :smile: )
 

slvr_phoenix

Splendid
Dec 31, 2007
6,223
1
25,780
>this would also be extremely thread-unsafe, unless the programmers are geniuses
The question is: does it really matter if they're thread-unsafe? If you program it right, it may not matter. It'd be an interesting experiment. The flaw, however, would be that your CPU(s) would always be maxed out, and the actual intelligence of the 'monsters' would scale directly with the proc time they get. So people with better systems would see better AI, but the more 'monsters' that come into play, the dumber they all get. :O

What it could really do though, if done well, is revolutionize RTS style games. If each unit were its own thread and you used shared memory pointers for units to resolve their own attacks on each other, that could get really wicked.

<pre> :eek: <font color=purple>I express to you a hex value 84 with my ten binary 'digits'. :eek: </font color=purple></pre><p>@ 185K -> 200,000 miles or bust!
 

Kelledin

Distinguished
Mar 1, 2001
2,183
0
19,780
>The question is, does it really matter if they're thread-unsafe? If you program it right, it may not matter.
There's no "right" way to do what you're thinking. Each NPC AI generally has to interact with the 3D world in some fashion, at the very least by reading its state (and the state may change at any time in a multithreaded model). So mutexes/semaphores have to be used, which is another level of resource contention that could screw with the user experience. And if, say, you forget to lock or unlock a mutex near some critical point, your game will likely either hang, crash, or otherwise misbehave in ways that are hard to reproduce or debug.

Never mind dealing with the fact that an executing thread may die unexpectedly through no action of its own (thread cancellation by a parent) and how to arrange cleanup operations for that sort of thing. Worse yet, those all-important mutex/semaphore locks are often part of the cleanup, so memory leaks are the least of your worries.

Multithreaded programming is just too fragile.
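The forgotten-unlock failure mode is real, though scoped locking (RAII in C++, `with` in Python) takes some of the fragility out: the lock is released when the scope exits, even if the code inside blows up. A minimal sketch:

```python
import threading

lock = threading.Lock()
state = {"hp": 100}

def risky_update():
    with lock:            # released automatically on ANY exit path
        state["hp"] -= 10
        raise RuntimeError("AI callback misbehaved")

try:
    risky_update()
except RuntimeError:
    pass

# The lock was released despite the exception, so the game loop
# can still take it afterwards -- no hang.
still_usable = lock.acquire(blocking=False)
lock.release()
print(still_usable, state["hp"])  # True 90
```

It doesn't solve the harder cancellation problem raised above, since a thread killed from outside never gets to run its cleanup at all.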

Still, there's obviously room for it in games. Sonoran's example is a poster-child, but relevant nonetheless. Currently, your typical single-thread game handles NPC AI by cycling through a list of objects defining each NPC's state every time it has to update the state of the world. It can adopt its own language to signify limited NPC states and actions and retain complete control over NPCs, or actually pass control to a callback function for each NPC for maximum flexibility--but without preemptive threading and the ability to cancel threads, it has no recourse if an NPC callback decides to be a hog and run forever.

Threading is ideal, but context switching can kill performance if you're trying to multithread on only one logical processor. So threading some fifty NPC AIs isn't feasible, never mind "dumb" objects like elevators, doors, springboards, rolling smoke, rippling water etc. This doesn't mean multithreading is out of the question--with more than one logical processor, the workload can still be divided up between the spare processors, each processor can cycle through its pool of callbacks, and a "master" thread can cancel callback threads when things get out of hand. The next problem is amortizing the cost of starting and killing threads, but with proper design, this should rarely be necessary anyways.
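The pool-of-callbacks idea maps naturally onto a fixed worker pool: a handful of workers, sized to the logical processor count, cycle through every NPC's per-tick callback. A rough sketch (Python; the NPC structure and tick counts are made up):

```python
from concurrent.futures import ThreadPoolExecutor

def update_npc(npc):
    # stand-in for one NPC's per-tick AI callback
    npc["x"] += 1
    return npc

npcs = [{"id": i, "x": 0} for i in range(50)]

# Four workers cycle through all fifty callbacks each tick --
# no thread-per-NPC, so context-switch cost stays bounded.
with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(3):              # three game ticks
        list(pool.map(update_npc, npcs))

print(sum(npc["x"] for npc in npcs))  # 150: every NPC ticked 3 times
```

What a simple pool cannot do is cancel a callback that decides to run forever; the "master" watchdog thread described above still has to be built on top of it.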

And with your average game having dozens or even hundreds of NPCs per map, it can scale to more logical cores than either Intel or AMD supports.

The only problem is getting there with a stable multithreaded model. As it is, Intel's clearly limited to a paper launch, and game developers may or may not feel like targeting vaporware technology. By the time game developers are ready for the tech, AMD will likely have desktop dual-cores in decent supply. Hell, AMD may have that before Intel's "launch" makes it off of paper.

<i>"Intel's ICH6R SouthBridge, now featuring RAID -1"

"RAID-minus-one?"

"Yeah. You have two hard drives, neither of which can actually boot."</i>