<font color=purple> "Stop Whining!"
Ed Stroligo - 9/15/05
Computers haven't gotten much faster the last few years. It doesn't look like they're going to get a whole lot faster for most purposes the next few years, either.
Yes, the CPU makers have real problems; I know that all too well. However, no one else seems willing to pick up the ball and run with it.
Whenever programmers open their mouths these days, it seems like all they do is whine about their multithreaded fate. For once, the burden is on them to get their programs running faster, rather than let Intel or AMD do it for them, and what are they doing?
Are they saying, "We look forward to the challenge," or showing enthusiasm about new opportunities? I've seen little of that. No, instead I see moaning and groaning because the old dogs will have to learn some new tricks.
I'm not trying to minimize the difficulties; I'm complaining about the attitude. It sounds more like politicians from a certain waterlogged place talking than Geek Valhalla.
Even some of the hardware people are getting into the act, saying that more cores rather than faster cores will bottleneck their video cards.
Well, gee, isn't that too bad! Guess you'd better go out of business.
That's what it boils down to: Adapt or die. The world is going to change to multicores, multithreading for all but the mundane stuff; probably change for good. If you don't do it, somebody else will, and soon after that, you'll become a trivia question.
You know, it's very unbecoming of geek gods to gripe so greatly. Makes them seem all too . . . human.
Perhaps more importantly for the overall industry, how can one expect mere enthusiasts to run out and buy these technologies when those who'll have to make it go have such a "can't do" attitude?
If the hardware doesn't get much better, and the programming doesn't get much better, why buy?
Ed</font color=purple>
What examples?
And I'm sorry, but not only does Ed sound like an arse, he also doesn't seem to have a clue. Programmers aren't whining because they don't want to adapt. Look at how many have adapted to optimizing for unique architectures like Netburst, K7, K8, etc. Look at how many have adapted to new environments such as .NET, Qt, etc.
What Ed seems to be oblivious to is twofold:
1) Multithreading code is easy. Multithreading code without creating timing bugs, memory leaks, etc. is f'ing hard. You need a multitude of different platforms (since you need different timings to even find timing bugs), which incurs a nasty expense. You'll have a hell of a time debugging, because with multiple threads, stepping through code becomes difficult to impossible. (Compared to the walk in the park it used to be.) And you'll generally need to completely redesign your code from the ground up to even make it possible, which is a hell of a lot more than a simple port or adaptation and will create numerous new bugs in the redesign (since no one is perfect).
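To make the point concrete, here's a minimal sketch (mine, not from the post) of the classic timing bug being described: a shared counter updated by several threads. The increment is a read-modify-write, so without the lock, updates can silently get lost depending on thread scheduling — exactly the kind of bug that only shows up on some machines, some of the time.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    """Increment the shared counter; the lock makes the
    read-modify-write atomic with respect to other threads."""
    global counter
    for _ in range(iterations):
        with lock:  # remove this lock and the total can come up short
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # with the lock, always 40000
```

Note that the unlocked version may still print 40000 on any given run — which is precisely why these bugs are so hard to find in testing.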
2) Even after all of that great expense and hassle to get <i>good</i> multithreaded code, chances are that you'll maybe gain a 5-20% performance boost from your second CPU and multithreaded code as most of the time you won't really be able to run multiple simultaneous heavy-use threads anyway. And on single CPU computers you'll often see a performance <i>drop</i> from all of your thread interaction and timing code.
What Ed doesn't seem to have a clue about is that in the vast majority of cases, multithreading <i>isn't</i> a useful answer. A considerable number of programs simply <i>can't</i> gain from it, and of those that can, the gain definitely won't be anywhere close to a 1:1 scale for each new proc/core. And that's after spending well over <i>four times</i> the resources in time, money, manpower, etc. to multithread your code. And even then, there is always the chance of yet another timing bug waiting to be found in one of the many configurations you didn't use in testing.
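The sub-linear scaling described here can be put in numbers with Amdahl's law: if only a fraction p of a program's work can run in parallel, the best possible speedup on n cores is 1 / ((1 - p) + p/n). A quick sketch, using illustrative fractions rather than measurements from any real program:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Best-case speedup per Amdahl's law for a program whose
    parallelizable share of work is parallel_fraction."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A program that is 50% parallelizable tops out at 1.33x on two
# cores -- nowhere near the 2x that 1:1 scaling would imply.
print(round(amdahl_speedup(0.5, 2), 2))    # 1.33
# Even with unlimited cores, the serial half caps it at 2x.
print(round(amdahl_speedup(0.5, 1000), 2))
```

And this is the theoretical ceiling, before any of the locking and thread-coordination overhead discussed above is subtracted.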
Other than the few highly parallel programs out there, the only real advantage of multiple procs/cores is that you can run multiple high-usage independent programs at the same time without them strangling each other. Whoop-de-doo.
At least with Intel's HT concept, your 'real' core can access all of the execution units that your 'fake' core isn't using, thereby maximizing the CPU's productivity, whereas dual-core quite often just leaves one processor sitting around doing almost nothing. Even if both of a dual-core's cores show high CPU usage, that (to my knowledge) just indicates high instruction throughput, not high execution unit usage. Not that HT is any ultimate answer either, but technically speaking, a processor would be a lot better off with a large bank of execution units and cache shared by several separate instruction handlers than with wholly separate cores. You're really a lot better off improving instruction-level parallelism than thread-level parallelism. Multicore CPUs are definitely no panacea.
یί∫υєг ρђœŋίχ
<font color=red><i>Deal with the Devil. He buys in bulk.</i></font color=red>
@ 197K of 200K!