
Intel says 64-bit is unnecessary

September 17, 2003 12:50:50 AM

From Anandtech

Quote:
Well, Intel's Chief Technology Officer seems to think that 64-bit desktop computing is not needed right now in the industry. He also went on to say that AMD and Apple are getting a little ahead of themselves by releasing 64-bit chips now. Sour grapes on Intel's part or do you think he is right? We've already got a discussion going on about this topic right here in General Hardware:

AMD and Apple are touting 64-bit computing on the desktop far too quickly, Intel CTO Pat Gelsinger said today.


Moving beyond 32-bit addressing is "really not needed for several more years", he told reporters attending the Intel Developer Forum in San Jose.


AMD, of course, isn't going to wait that long. Next week, the company will unveil its long-awaited 64-bit desktop processor, the Athlon 64. And, just a few weeks ago, Apple began shipping its Power Mac G5 desktop based on the 64-bit IBM PowerPC 970 processor.


But if Gelsinger's comments are anything to go by, Intel believes its rivals are coming to market too early.


I would have to agree with them.
BUT, it seems that whenever Intel deems 64-bit 'appropriate' for consumer use, it will suddenly gain massive popularity... which sounds like 3dfx telling us how we didn't need 32-bit color.
Your old A64 is going to be faster in the inevitable 64-bit future than your old P4 or Prescott.
So it's still a better long-term investment if you want the most longevity.
It's also funny how they have 3.2GHz P4s available; no one needs that in a consumer PC either.

I love the comments posted on Anandtech articles. You THG peeps should be reading them and slap some of that Intel preference out of your filthy little mouths!

Athlon 1700+, Epox 8RDA (NForce2), Maxtor Diamondmax Plus 9 80GB 8MB cache, 2x256mb Crucial PC2100 in Dual DDR, Radeon 9800NP, Audigy, Z560s, MX500
September 17, 2003 1:19:26 AM

Quote:
It's also funny how they have 3.2GHz P4s available; no one needs that in a consumer PC either.

Well, *something* has to be the flagship CPU that no one really needs. If the fastest were 1.5GHz, programs would probably not be as demanding, and then people would say no one needs a 1.5GHz.
September 17, 2003 2:15:15 AM

Gaming should be able to make use of 64-bit; I guess Intel doesn't consider this a good enough reason.
September 17, 2003 2:18:56 AM

What would it really use?

No game breaches 4GB of addressing. Only the A64's extra registers would be used, not the 64-bit width itself.

--
Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you! (http://www.lochel.com/THGC/album.html)
September 17, 2003 2:23:05 AM

Quote:
I would have to agree with them.


Wow, Kinney, talk about double standards.

You desperately need a 64-bit processor, which you will likely throw away in 2-3 years for another one, and in that timeline 64-bit won't be making you buy 4GB of RAM but rather giving you performance through the extra registers, not the actual bit width. But then you agree with Intel's CTO...


--
Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you! (http://www.lochel.com/THGC/album.html)
September 17, 2003 2:33:20 AM

But isn't a 64-bit operating system needed to go along with the CPU?? Which is still a "bit" in the future...
September 17, 2003 3:02:14 AM

LoL, toss yer money away on the Athlon 64 and I am willing to say that you won't be using the 64-bit functions this year. Willing to bet next year as well. Oh, there will be benchmark software for sure, but otherwise, real-world software? No.

But hey, it's not my money. I'm happy with my little 32-bit chip, running happily supported with lots of 32-bit software. Enjoy the wasteland you're about to step into...

-Jeremy

:evil: Busting Sh@t Up!!! (http://service.futuremark.com/compare?2k1=6940439) :evil:
:evil: Busting More Sh@t Up!!! (http://service.futuremark.com/compare?2k3=1228088) :evil:
September 17, 2003 4:47:22 AM

There's also the problem that people assume that when Intel comes out with their desktop 64-bit chip, it'll be compatible with x86-64. If it's not (and I'm willing to bet it won't be) then this whole "adopt it now" mentality is a bit shortsighted. Are people who adopted 3DNow! when it came out enjoying the benefits of it years down the line, and is Intel regretting implementing SIMD a year or so later? Nope, the majority of things use SSE, and all those who adopted 3DNow! early are sitting there with K6-2's, which are outdated anyway. Yes, you can run the 3DNow! software that came out later, just like you will be able to run x86-64 applications with a K8 later, but by the time that software comes out... your processor is obsolete and you'll need to buy a new one anyway.

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
September 17, 2003 4:57:13 AM

Someone had to break the mold. 32-bit computing will probably dominate for the next 4 years, which means a new A64 will be worthless before its new feature is worthwhile. But because that CPU exists, 64-bit programming will at least start to get a foothold in PC programs.

Only a place as big as the internet could be home to a hero as big as Crashman!
Only a place as big as the internet could be home to an ego as large as Crashman's!
September 17, 2003 5:14:18 AM

But that's exactly the point. IF Intel adopted x86-64 (or publicly supported it), software *may* become widely available. Even if AMD managed to saturate *all* of their market with x86-64, that'd *still* only be what, 15% of the market that x86-64 software vendors would be catering to?
And by the time Intel "joins in", they will probably have their own ISA and we'd be really no better off, because x86-64 software *won't work on there anyway*. So no, they're not "breaking the mold" with x86-64; they're simply presenting a new mold which may or may not lead to what future software will use.
This is, of course, mostly Intel's fault for not adopting x86-64, but the fact remains that it won't be the pleasant world of "everything is 64-bit and will work on all future processors" that people imagine.

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
September 17, 2003 5:42:33 AM

Desktops right now don't need 64-bit, that's true.

But I think this is a smart move on AMD's part. If AMD pulls this off cleanly and correctly, they will force Intel into using AMD64. If that occurs, that's a big blow against Intel, almost more important than having the fastest CPU, as long as they keep up in performance.

The most interesting thing is seeing Windows XP 64 run both 32-bit and 64-bit programs. That's a GREAT transition from 32-bit to 64-bit. It may be interesting to see how Unreal 64 turns out.
September 17, 2003 9:12:59 AM

Yeah, but Intel's Hyper-Threading and 800MHz FSB are real necessary... Intel, what a bunch of w*nkers...

If he doesn't die, he'll get help!!!
September 17, 2003 10:50:03 AM

Quote:
And by the time Intel "joins in", they will probably have their own ISA and we'd be really no better off, because x86-64 software *won't work on there anyway*.

But if the A64's 32-bit performance *is* as 'Intel-bashing' as some seem to think, *and* Scotty has 'issues'... then AMD might be able to increase their market share, and if they can get just a few more %, that would go a long way toward encouraging software developers to develop for x86-64, and might force Intel to somehow include support for it in whatever 64-bit desktop chip they produce, as they can't just completely ignore a comparatively large % of the people. I guess it just depends on how long Intel waits before introducing its own 64-bit stuff.

Just some thoughts.



---
The end is nigh.. (For this post at least) :smile:
September 17, 2003 11:50:01 AM

I'm getting into programming as a Computer Science major, and from what little I know, programmers would rather work with 64 bits than 32 bits. More space for them to work. But then again, working with registers is all low-level code stuff (i.e. hex or opcodes), so once someone makes a decent compiler for 64-bit (and I imagine they have something close to it already, since there are 64-bit processors out there), then all the programmer needs to do is recompile his/her code with the new compiler and poof! He/she has a 64-bit program.
I'm still learning, so my opinion has probably got a few holes.
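
For what it's worth, here's a minimal sketch of what that "just recompile" step actually changes, assuming a C program and a typical LP64 64-bit target (the code is a hypothetical illustration, not anything from the thread):

```c
#include <stdio.h>
#include <stdint.h>

/* The same source, compiled for a 32-bit vs. a 64-bit target, changes
   these sizes without any code edits -- the "recompile and poof" part. */
int main(void) {
    printf("int:     %zu bytes\n", sizeof(int));      /* 4 on both */
    printf("long:    %zu bytes\n", sizeof(long));     /* 4 vs. 8 on LP64 */
    printf("pointer: %zu bytes\n", sizeof(void *));   /* 4 vs. 8 */
    printf("intmax:  %zu bytes\n", sizeof(intmax_t)); /* 8 on both */
    return 0;
}
```

The catch is that any code that *assumed* a pointer fits in 32 bits still has to be fixed by hand; only clean code gets the free recompile.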

Purchase object A, install object A, curse object A, repeat...
September 17, 2003 3:55:55 PM

So Kinney, the way I see it, our conversation in the GFX forum pretty much sums up how your "facts" were nothing but personal opinions, which, as I told you or hinted, you'd see picked apart if you brought them here, given how weak your mindset is.

You just want to buy the A64 because it has 64-bit and won't consider anything else that doesn't, and yet the people here, far more educated on this than you, tell you it's absurd.

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you!</b></font color=blue></A><P ID="edit"><FONT SIZE=-1><EM>Edited by Eden on 09/17/03 12:00 PM.</EM></FONT></P>
September 17, 2003 5:45:32 PM

to no one in particular:
Personally I agree with Intel. There's no real reason to go to a 64-bit processor yet. However, I also think that there's no real harm in companies doing so anyway, because eventually it will happen. It's just still years off before it starts to really make sense for most people.

simwiz2:
Quote:
If the fastest were 1.5GHz, programs would probably not be as demanding, and then people would say no one needs a 1.5GHz.

Only someone who never had to write software during the "640KB" era could give such an untrue answer. There have been several times in the past when hardware did not advance fast enough and software authors pulled their hair out trying to optimize their code to squeeze every last bit of performance out of the hardware that was available. Only because of the Intel/AMD speed war a few years ago did the advancement of hardware actually outrun the advancement of software. Oh, how cynical people have gotten in just a couple of years. :(

endyen:
Quote:
Gaming should be able to make use of 64-bit; I guess Intel doesn't consider this a good enough reason.

*Gaming* can (and usually does) make use of *any* optimization. If Intel were to double the number of 32-bit general-purpose registers in their x86 CPUs, you'd see just as much of a performance gain. Hardly any of these optimizations actually use 64-bitness. They're almost entirely just using the extra general-purpose registers for 32-bit math.
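
To make that concrete, here is a hedged sketch (hypothetical code, not from any actual game) of why extra general-purpose registers speed up plain 32-bit math:

```c
#include <stdint.h>

/* A hot loop with many simultaneously-live 32-bit values.  With only the
   8 GPRs of x86 (some of them reserved), the compiler has to spill part
   of a..h to the stack every iteration; with 16 GPRs, everything can stay
   register-resident -- no 64-bit arithmetic involved anywhere. */
uint32_t mix(const uint32_t *p, int n) {
    uint32_t a = 0, b = 1, c = 2, d = 3, e = 4, f = 5, g = 6, h = 7;
    for (int i = 0; i < n; i++) {
        a += p[i];  b ^= a;  c += b;  d ^= c;
        e += d;     f ^= e;  g += f;  h ^= g;
    }
    return a ^ b ^ c ^ d ^ e ^ f ^ g ^ h;
}
```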

TomV:
Quote:
But isn't a 64-bit operating system needed to go along with the CPU?? Which is still a "bit" in the future...

There have been AMD64 versions of Linux and beta AMD64 versions of Windows for a little while now.

imgod2u:
Quote:
There's also the problem that people assume that when Intel comes out with their desktop 64-bit chip, it'll be compatible with x86-64. If it's not (and I'm willing to bet it won't be) then this whole "adopt it now" mentality is a bit shortsighted.

I completely agree. Granted, someone has to pave the way still, but for most people it's a waste.

I'd also be highly willing to bet that when Intel comes out with x86-64 (and it probably will happen unless they jump right straight to IA64) it'll be with even more GPRs (probably a *lot* more) and will be a much more idealistic (pragmatic?) extension of the instructions, so that low-level code *will* have to be completely ported to fully utilize 64-bitness. (Thus preserving the 16-bit addressing even when simultaneously using 64-bit instructions.) I'd bet Intel would also release compilers that make this transition easy and possibly even transparent if you just want the lazy porting that AMD is pushing.
Quote:
but by the time that software comes out... your processor is obsolete and you'll need to buy a new one anyway.

Funny how so many people miss the obviousness of this. :o
Quote:
And by the time Intel "joins in", they will probably have their own ISA and we'd be really no better off, because x86-64 software *won't work on there anyway*.

I don't agree. It'll probably end up more like the various pixel shader standards in graphics cards. Once the incompatibilities of the standards occur, software will just have to add a layer of abstraction pinned in place by lower-level standard-specific code branches. It'll be a pain in the arse, but compatibility will remain. It'll just mean that if the newer ISA is more robust (which I'm betting it would be) then there will be a noticeable performance difference that will entice people to upgrade to the new ISA. But software will find a way to preserve compatibility.

Crashman:
Quote:
Someone had to break the mold. 32-bit computing will probably dominate for the next 4 years, which means a new A64 will be worthless before its new feature is worthwhile. But because that CPU exists, 64-bit programming will at least start to get a foothold in PC programs.

I pretty much agree. The only difference is that I would say that it's the 64-bit mentality of customers that will get a foothold, and software will slowly be forced to comply with that in order to keep the sales figures up.

TknD:
Quote:
If AMD pulls this off cleanly and correctly, they will force Intel into using AMD64.

No it won't. If anything, it will convince Intel to release their own flavour of x86-64 that probably *won't* be compatible with AMD64. However, since Intel has so much invested in IA64, I'd dare say that Intel is working feverishly on finding a good way to crossbreed it with IA32.

ChipDeath:
Quote:
then AMD might be able to increase their market share, and if they can get just a few more %, that would go a long way toward encouraging software developers to develop for x86-64, and might force Intel to somehow include support for it in whatever 64-bit desktop chip they produce, as they can't just completely ignore a comparatively large % of the people.

Close, but no tequila. As far as CPU-specific optimizations go, software developers fall into two categories: those who do, and those who don't. Those who do will do so anyway because they have the resources to invest that many additional man-hours. Games usually fall into that category. Those who don't won't do so unless there's a majority of market share involved (i.e. 50%). So just a few more percent for AMD won't change the number of programmers optimizing, because either they already plan to optimize or they already plan to not waste resources supporting it.

However, you're right on the Intel side. Consumers will decide that 64-bit matters, whether or not it actually does anything for them. Hype and a market driven by ignorance is a dangerous combination. So eventually Intel's marketing will lean on the engineers enough to make something happen, just so that Intel doesn't lose too many sales.

ImpPatience:
Quote:
I'm getting into programming as a Computer Science major, and from what little I know, programmers would rather work with 64 bits than 32 bits. More space for them to work.

If anyone tells you this, punch them in the nose! That's *soooooooo* not true. Programmers want more general-purpose registers. Programmers, however, already have access to 64-bit integers by using a 32-bit low half and a 32-bit high half. It's slow, but it's also hardly ever needed. If suddenly all integers were 64-bit by standard, though, it'd drive most software engineers bonkers, as their software's memory usage would have just doubled for no good reason!

Further, in *most* cases where you need more accuracy than a 32-bit integer can provide, you use floating-point math already. So this all makes the actual usefulness of 64-bit integers pretty slim.
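
As a hedged illustration of that low/high-half trick (hypothetical code, purely to show the idea): a 64-bit add on a 32-bit machine is just two 32-bit adds plus a carry, which is exactly why it's slow but workable:

```c
#include <stdio.h>
#include <inttypes.h>

/* A 64-bit value faked out of two 32-bit halves, the way code targeting
   a 32-bit CPU has to do it. */
typedef struct { uint32_t lo, hi; } u64_pair;

static u64_pair add64(u64_pair a, u64_pair b) {
    u64_pair r;
    r.lo = a.lo + b.lo;
    r.hi = a.hi + b.hi + (r.lo < a.lo);  /* carry if the low half wrapped */
    return r;
}

int main(void) {
    u64_pair a = { 0xFFFFFFFFu, 0 };  /* 2^32 - 1 */
    u64_pair b = { 1, 0 };
    u64_pair s = add64(a, b);         /* expect 2^32: hi=1, lo=0 */
    printf("hi=%" PRIu32 " lo=%" PRIu32 "\n", s.hi, s.lo);
    return 0;
}
```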

People don't understand how hard being a dark god can be. - Hastur (http://ars.userfriendly.org/cartoons/?id=20030905)
September 17, 2003 6:53:24 PM

It's about time they bring this old 64-bit back.
September 17, 2003 7:13:21 PM

Quote:
I'd also be highly willing to bet that when Intel comes out with x86-64 (and it probably will happen unless they jump right straight to IA64) it'll be with even more GPRs (probably a lot more) and will be a much more idealistic (pragmatic?) extension of the instructions, so that low-level code will have to be completely ported to fully utilize 64-bitness. (Thus preserving the 16-bit addressing even when simultaneously using 64-bit instructions.) I'd bet Intel would also release compilers that make this transition easy and possibly even transparent if you just want the lazy porting that AMD is pushing.

Erm, no. Increasing the register count would mean you'd have to add yet *another* processor mode and more x86 prefixes to your instructions. This will not only cause software developers a load of pain, it'll also be very expensive to implement on a chip, since it has to support *both* x86-64 mode *and* the mode that supports 32 GPRs.
As for Intel releasing an x86-64 compatible chip: highly unlikely. Unless, of course, they're willing to just throw IA-64 out the window, which they're not likely to do. More likely they'll just improve their emulation of x86 (having that marvelous Alpha development team work on it) and release a low-priced, high-performing IA-64 chip.

Quote:
I don't agree. It'll probably end up more like the various pixel shader standards in graphics cards. Once the incompatibilities of the standards occur, software will just have to add a layer of abstraction pinned in place by lower-level standard-specific code branches. It'll be a pain in the arse, but compatibility will remain. It'll just mean that if the newer ISA is more robust (which I'm betting it would be) then there will be a noticeable performance difference that will entice people to upgrade to the new ISA. But software will find a way to preserve compatibility.

There is a *huge* difference between various pixel shaders and a CPU ISA. The pixel shader standards are high-level standards, i.e. guidelines you use in DX programming. At the lower level, you have the driver interpreting these commands and compiling them to the proprietary ISA (which, as I recall, is VLIW) of the GPU. CPUs do not have that luxury (unless you're running an interpreted language like Java or .Net). The software has to directly access the ISA, and putting *that* many ISA supports on the processor itself, instead of in an emulation layer of software, would be *very* expensive and infeasible.
Now, with Intel's recent adamant move of bringing high-performance x86 emulation to IA-64, it could be possible they may throw x86-64 support in there, but why would they want to support something that'll directly compete with IA-64? Did SSE-supporting processors support 3DNow!, or did Intel simply tell software developers "I'm big, support me or die, drop 3DNow!"? Well, how many 3DNow!-optimized applications do you see out there now?

Quote:
Close, but no tequila. As far as CPU-specific optimizations go, software developers fall into two categories: those who do, and those who don't. Those who do will do so anyway because they have the resources to invest that many additional man-hours. Games usually fall into that category. Those who don't won't do so unless there's a majority of market share involved (i.e. 50%). So just a few more percent for AMD won't change the number of programmers optimizing, because either they already plan to optimize or they already plan to not waste resources supporting it.

I would say it's the exact opposite. Game developers are usually more constrained by time-to-market than by optimization. Yes, they do a lot of high-level optimization in DX to run better on various video cards (this has only been a recent thing, though), but as far as low-level CPU optimization goes, well, how many SSE2-supporting games do you see out there?
Now consider Lightwave or Photoshop.

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
September 17, 2003 8:43:52 PM

Quote:
Erm, no. Increasing the register count would mean you'd have to add yet *another* processor mode and more x86 prefixes to your instructions. This will not only cause software developers a load of pain, it'll also be very expensive to implement on a chip, since it has to support *both* x86-64 mode *and* the mode that supports 32 GPRs.

Depending on how it's implemented, it might not cause software developers any pain at all. It certainly doesn't have to be as painful as the 16-bit to 32-bit transition was. (And even then the port was pretty easy for lazy programmers, if you didn't mind wasting a lot of memory to do the same thing.)

However, what if Intel implemented x86-64 in a way that all of the 32-bit and 16-bit instructions and register names still worked exactly as they do now, the 64-bit registers were just added to the list of available registers, and the 64-bit instructions were of a similar naming convention but with just a small spelling change? It could easily be done in such a way that all old code would still run exactly as it did before and the 64-bit processing was just an extension.

Quote:
As for Intel releasing an x86-64 compatible chip: highly unlikely. Unless, of course, they're willing to just throw IA-64 out the window, which they're not likely to do. More likely they'll just improve their emulation of x86 (having that marvelous Alpha development team work on it) and release a low-priced, high-performing IA-64 chip.

I don't know. In a perfect world I'd agree with you, but in reality I'm not sure how feasible that is. The Itaniums don't have the clock speed needed to execute x86 well. Itanium is *too* designed for out-of-order execution to emulate the linearity of x86 in a way that *wouldn't* piss customers off when their new hybrid chip ran much slower than a top-notch P4.

Of course, if they could just find a way to add the IA64 instruction set to a P4 in a way that wouldn't totally suck (as in 64-bit execution that at least matched 32-bit execution despite the architecture differences), even if it performed worse than an Itanium it'd still have potential. (So I guess that'd be the reverse of what Intel is trying now... emulating the Itanium side instead of emulating the P4 side.)

Quote:
There is a *huge* difference between various pixel shaders and a CPU ISA. The pixel shader standards are high-level standards, i.e. guidelines you use in DX programming. At the lower level, you have the driver interpreting these commands and compiling them to the proprietary ISA (which, as I recall, is VLIW) of the GPU. CPUs do not have that luxury (unless you're running an interpreted language like Java or .Net). The software has to directly access the ISA, and putting *that* many ISA supports on the processor itself, instead of in an emulation layer of software, would be *very* expensive and infeasible.

Again, that depends on exactly *how* it was done. Depending on how the two different x86-64 ISAs were written, it *could* be possible that the primary differences are just in the number of GPRs and the instructions themselves. In which case it's just a simple remap layer to make one of the standards run with code from the other. I never said that it wouldn't be a pain in the arse or that it wouldn't have the possibility of being a minor performance hit. But considering just how few x86 commands are literal to the CPU anymore, it's hardly crazy to consider it a feasible possibility.

Hell, if Transmeta were to design a 64-bit processor, the world could be a very scary place. (And if they were to team up with any other processor manufacturer to use the combined technical know-how to boost performance, it'd be even scarier.) So don't tell me that it's not possible or even feasible. It's just not a normal mode of thought, is all.

Quote:
Now, with Intel's recent adamant move of bringing high-performance x86 emulation to IA-64, it could be possible they may throw x86-64 support in there, but why would they want to support something that'll directly compete with IA-64? Did SSE-supporting processors support 3DNow!, or did Intel simply tell software developers "I'm big, support me or die, drop 3DNow!"? Well, how many 3DNow!-optimized applications do you see out there now?

That's hardly a fair comparison. 3DNow! wasn't an Intel product. It was a direct competitor to a standard that they had already been working on, so of course Intel wasn't going to support it.

A new x86-64 ISA from Intel, however, could easily coexist with IA64. If it didn't perform 'as' well as IA64, then there'd still be a justification for IA64. I mean sure, it might eat a chunk from the low-end Itanium market, but both are still money in Intel's pocket. And since flops are still x87 even on an x86-64 CPU, Itanium has nothing to fear, since it's the flops king and the P4 isn't.

Quote:
I would say it's the exact opposite. Game developers are usually more constrained by time-to-market than by optimization.

What world do you live in? Games are hardly ever concerned about time-to-market. And ever since 3D hardware became common on home PCs, game development has *always* been about optimizing for as many different standards as possible. Until DX and OpenGL became mainstream, audio and video *had* to be written for numerous different paths. And what game *isn't* highly optimized? I'm sorry, but you're so incredibly wrong on this one. The game market is driven much more by extreme optimization on as many pieces of hardware as possible than it is by time-to-market. The engines themselves take *years* to write.

Quote:
Yes, they do a lot of high-level optimization in DX to run better on various video cards (this has only been a recent thing, though), but as far as low-level CPU optimization goes, well, how many SSE2-supporting games do you see out there?

Do you even remember the MMX frenzy that gamers and game coders whipped themselves into? Do you know of a single game released recently that will run on a processor that doesn't support at least that, if not 3DNow!/SSE?

Quote:
Now consider Lightwave or Photoshop.

What about them? Photoshop is a perfect example of software optimized far more for a Mac than for a PC, and Lightwave is a perfect example of software optimized far more for a P4 than for an Athlon. Two perfect cases where development was targeted at the platform of their majority user base and *not* optimized well for any other platforms. You couldn't have handed me more perfect examples of my point if you'd tried.

People don't understand how hard being a dark god can be. - Hastur (http://ars.userfriendly.org/cartoons/?id=20030905)
September 17, 2003 8:58:38 PM

Quote:
I don't know. In a perfect world I'd agree with you, but in reality I'm not sure how feasible that is. The Itaniums don't have the clock speed needed to execute x86 well. Itanium is too designed for out-of-order execution to emulate the linearity of x86 in a way that wouldn't piss customers off when their new hybrid chip ran much slower than a top-notch P4.

Of course, if they could just find a way to add the IA64 instruction set to a P4 in a way that wouldn't totally suck (as in 64-bit execution that at least matched 32-bit execution despite the architecture differences), even if it performed worse than an Itanium it'd still have potential. (So I guess that'd be the reverse of what Intel is trying now... emulating the Itanium side instead of emulating the P4 side.)

Consider this: a dual-core chip. One core built on the IA64 architecture and the other built on IA32 (Vanderpool Technology?).



---
Wanted: Large breasted live-in housekeeper. Must be a good cook, organized, and willing to pick up after me.
September 18, 2003 1:04:35 AM

No, it's not a double standard.
They may be right, but that doesn't really mean the Prescott is going to be a better buy than the A64.

Athlon 1700+, Epox 8RDA (NForce2), Maxtor Diamondmax Plus 9 80GB 8MB cache, 2x256mb Crucial PC2100 in Dual DDR, Radeon 9800NP, Audigy, Z560s, MX500
September 18, 2003 1:09:00 AM

Linux already supports the x86-64 extensions.
64-bit Windoze is supposed to be out by Xmas.

Athlon 1700+, Epox 8RDA (NForce2), Maxtor Diamondmax Plus 9 80GB 8MB cache, 2x256mb Crucial PC2100 in Dual DDR, Radeon 9800NP, Audigy, Z560s, MX500
September 18, 2003 1:10:21 AM

I will also be using 32-bit software.

Then if there is a 'killer app' or blowout must-have 64-bit program out there, I will be able to use it.
Happily.
:smile:

Athlon 1700+, Epox 8RDA (NForce2), Maxtor Diamondmax Plus 9 80GB 8MB cache, 2x256mb Crucial PC2100 in Dual DDR, Radeon 9800NP, Audigy, Z560s, MX500
September 18, 2003 1:14:42 AM

Comparing 64-bit computing to 3DNow! is not apples to apples.
So, don't buy a DX9 video card now because there are no games out for it?
I doubt anyone would recommend such a thing.

In the future, you might not be able to run software on a 32-bit-only processor at all.

Obsolete or not, at least the thing will be able to run 64-bit Linux or the upcoming 64-bit Windows effectively.

Athlon 1700+, Epox 8RDA (NForce2), Maxtor Diamondmax Plus 9 80GB 8MB cache, 2x256mb Crucial PC2100 in Dual DDR, Radeon 9800NP, Audigy, Z560s, MX500
September 18, 2003 1:17:58 AM

Ah, the judgmental and godlike Eden graces me with his decision on my f*cking intelligence yet again.

At least I don't pretend to be a CPU genius who reads Ars Technica and all of a sudden has his PhD. You're the expert.

You bring a lot to the table yourself with replies like that, a$$clown.

edit - actually, just forget this rude post. I realize you're mad because I have schooled your a$$ hard twice now... with fewer Ars Technica quotes than you.

Athlon 1700+, Epox 8RDA (NForce2), Maxtor Diamondmax Plus 9 80GB 8MB cache, 2x256mb Crucial PC2100 in Dual DDR, Radeon 9800NP, Audigy, Z560s, MX500
Edited by kinney on 09/17/03 08:32 PM.
September 18, 2003 2:31:07 AM

You did; I feel so hurt now. I should've known better than this, that the 64-bit freaks were right.

My deepest, sincere apologies, oh great 64-bit god.

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you!</b></font color=blue></A>
September 18, 2003 3:51:45 AM

Ah, good one.

One thing about you: I see how you got your post count up so high.

Your posts consist of zero information and 100% BS.

Athlon 1700+, Epox 8RDA (NForce2), Maxtor Diamondmax Plus 9 80GB 8MB cache, 2x256mb Crucial PC2100 in Dual DDR, Radeon 9800NP, Audigy, Z560s, MX500
September 18, 2003 3:53:59 AM

Eden, he's a fanboy; we can't do much for him now. He'll get his CPU one day and realize he can't run anything 64-bit. Oh well, I'll be happy with my 32-bit CPU anyways.

-Jeremy

:evil: Busting Sh@t Up!!! (http://service.futuremark.com/compare?2k1=6940439) :evil:
:evil: Busting More Sh@t Up!!! (http://service.futuremark.com/compare?2k3=1228088) :evil:
Edited by spud on 09/17/03 11:56 PM.
September 18, 2003 4:51:18 AM

Quote:
Depending on how it's implemented, it might not cause software developers any pain at all. It certainly doesn't have to be as painful as the 16-bit to 32-bit transition was. (And even then the port was pretty easy for lazy programmers, if you didn't mind wasting a lot of memory to do the same thing.)

If your software consists of only high-level code and the implementation were done in a way that accelerated the old-style optimizations, I *may* agree with you. However, Intel has never made a habit of doing this, and there are technical reasons why.

Quote:
However, what if Intel implemented x86-64 in a way that all of the 32-bit and 16-bit instructions and register names still worked exactly as they do now, the 64-bit registers were just added to the list of available registers, and the 64-bit instructions were of a similar naming convention but with just a small spelling change? It could easily be done in such a way that all old code would still run exactly as it did before and the 64-bit processing was just an extension.

The extended registers alone would mean that *every* x86-64 instruction would have to be duplicated (in terms of the internal micro-code recognition). There may not be much more added to the x86 code itself (as it requires only a prefix), but internally there would need to be a micro-op for IA-32, a micro-op for x86-64, *and* a micro-op for Intel's x86-64. That's a *lot* of work and far too costly.

Quote:
I don't know. In a perfect world I'd agree with you, but in reality I'm not sure how feasible that is. The Itaniums don't have the clock speed needed to execute x86 well. Itanium is too designed for out-of-order execution to emulate the linearity of x86 in a way that wouldn't piss customers off when their new hybrid chip ran much slower than a top-notch P4.

Itaniums are in-order execution. And there's no technical reason why Itaniums couldn't be hyper-pipelined and clocked higher like the P4. In fact, due to the lack of x86 decoders and the even distribution of functions, it could potentially clock *higher* with a 20-stage pipeline than the P7 core.

Quote:
Of course, if they could just find a way to add the IA64 instruction set to a P4 in a way that wouldn't totally suck (as in 64-bit execution that at least matched 32-bit execution despite the architecture differences), even if it performed worse than an Itanium it'd still have potential. (So I guess that'd be the reverse of what Intel is trying now... emulating the Itanium side instead of emulating the P4 side.)

Way too expensive in terms of transistor count. This may be feasible, but a software emulation that offers 80-90% of native IA-64 speed (like FX!32 did for Alpha) while running x86 would be much more preferable.

Quote:
Again, that depends on exactly how it was done. Depending on how the two different x86-64 ISAs were written, it could be possible that the primary differences are just in the number of GPRs and the instructions themselves. In which case it's just a simple remap layer to make one of the standards run with code from the other. I never said that it wouldn't be a pain in the arse or that it wouldn't have the possibility of being a minor performance hit. But considering just how few x86 commands are literal to the CPU anymore, it's hardly crazy to consider it a feasible possibility.

Again, the revision to the ISA may not be large, but in order to offer full x86-64 compatibility, your implementation would be huge in terms of the internal micro-code.

Quote:
Hell, if Transmeta were to design a 64-bit processor, the world could be a very scary place. (And if they were to team up with any other processor manufacturer to use the combined technical know-how to boost performance, it'd be even scarier.) So don't tell me that it's not possible or even feasible. It's just not a normal mode of thought, is all.

Transmeta would be very similar to what I think Intel would do: dynamic translation. Only they did it in firmware; I think Intel would rather go with software. However, both methods dynamically translate x86 instructions into a VLIW ISA.

Quote:
That's hardly a fair comparison. 3DNow! wasn't an Intel product. It was a direct competitor to a standard that they had already been working on, so of course Intel wasn't going to support it.

And IA-64 isn't a standard Intel is already working on? Like it or not, Intel plans on moving IA-64 down to the workstation and desktop, and x86-64 directly competes against this.

Quote:
A new x86-64 ISA from Intel, however, could easily coexist with IA64. If it didn't perform 'as' well as IA64, then there'd still be a justification for IA64. I mean sure, it might eat a chunk from the low-end Itanium market, but both are still money in Intel's pocket. And since flops are still x87 even on an x86-64 CPU, Itanium has nothing to fear, since it's the flops king and the P4 isn't.

SSE is technically superior to 3DNow!, but Intel didn't support that. It's not a matter of "could be done" but a matter of "is it worth it". Is creating 3 different ISAs to compete in the market (and make a mess of software compatibility) worth it? We're not talking about something as simple as SSE or non-SSE, as those still exist within the same mode (IA-32 protected mode). x86-64 exists in an entirely different mode of its own, and if you were to extend the registers again, you'd need another mode. Software does *not* work across multiple modes with a simple modification of a few files. You'd need 3 different copies. Even if you could label them correctly, that'd be a hassle for consumers.

Quote:
What world do you live in?

The real one.

Quote:
Games are hardly ever concerned about time-to-market.

Are you kidding me? Let's see, release a game 6 months earlier, or release a game that runs 1.5x faster...
The market for games is obvious. He who gets the games out faster sells more. The fact that so few games out there even use SSE to the extent that they could should attest to that. There's high-level optimization, yes, but what game developer is going to spend an extra 6 months, hire assembly programmers, and have them optimize their huge code base for SIMD?

Quote:
And ever since 3D hardware became common on home PCs, game development has always been about optimizing for as many different standards as possible.

High-level optimizations through common Windows/DX calls, yes. With help from the manufacturers, even using some programming interfaces like Glide. However, *never* low-level optimizations on the graphics card itself. Driver updates later on bring better low-level optimizations.

Quote:
Until DX and OpenGL became mainstream, audio and video had to be written for numerous different paths. And what game isn't highly optimized?

How many fully utilize SSE? How many today even fully utilize the DX8 engine in most DX8 cards? How many today even fully utilize the hardwired T&L engine in DX7 cards?

Quote:
I'm sorry, but you're so incredibly wrong on this one. The game market is driven much more by extreme optimization on as many pieces of hardware as possible than it is by time-to-market. The engines themselves take years to write.

Support for as many pieces of hardware, perhaps; utilization of that hardware? Hardly. We've only had a few engines out there that even remotely come close to having the levels of optimization found in programs such as Lightwave or Photoshop, the Quake 3 engine being one.

Quote:
Do you even remember the MMX frenzy that gamers and game coders whipped themselves into? Do you know of a single game released recently that will run on a processor that doesn't support at least that, if not 3DNow!/SSE?

MMX frenzy? Do you remember MMX? There was so much hype; then programmers took one look at it and most stayed far away from it.
3DNow!-optimized games? The one or two that exist? There's Quake 2 and... FF7? And 3DNow! was huge back when it first came out (technically speaking). It had great potential compared to the P2's x87 FPU. For such a technical feat, the adoption could be called nothing more than minimal.
As for "support", that is quite a different thing from "utilize". If I wrote a program right now, did absolutely no optimizations, downloaded ICC, and used the -qax flag, my code "supports" SSE2. Is it properly optimized? Not by a long shot. To do so, I'd have to delve into assembly and learn the intricacies of SIMD programming.

Quote:
What about them?

They have quite extensive optimizations for new ISA extensions and architectures, particularly the Pentium 4 and SSE2.

Quote:
Photoshop is a perfect example of software optimized far more for a Mac than for a PC,

Photoshop 7 was written and released on Windows *first* and then *ported* to OSX months later. The Photoshop filter results from digitalvideoediting.com and Aceshardware show that it's very well optimized for the Pentium 4 and SSE2.

Quote:
and Lightwave is a perfect example of software optimized far more for a P4 than for an Athlon. Two perfect cases where development was targeted at the platform of their majority user base and not optimized well for any other platforms. You couldn't have handed me more perfect examples of my point if you'd tried.

Hmmm, let's see, how is it optimized for the P4? Let's look at the results of the K7 (no SSE2) vs. the K8 (with SSE2). Oh wow! Look at that: more gain than in almost any other program. A software company that actually spent the time to hire assembly SIMD programmers to optimize their code for SIMD and... gained a large performance boost. Damn them for embracing new technologies, unlike the rest of the software out there that hasn't even heard of SSE2; damn those bastards that focus too much on the Pentium 4.

Quote:
Comparing 64-bit computing to 3DNow! is not apples to apples.

In many ways, I'd argue that the introduction of SIMD FP instructions was more of a revolution than 64-bit is. SIMD had the potential to increase performance by magnitudes in operations that are used today (as we saw in Quake 2, which was optimized for 3DNow! and is probably the only example there is). 64-bit's performance increase is limited to encryption and some scientific applications. Oh, and the ability to address more than 4GB of memory. It's a shame 3DNow! didn't get as much credit as it deserved. SSE may be technically superior, but it was a year late.

Quote:
So, don't buy a DX9 video card now because there are no games out for it?
I doubt anyone would recommend such a thing.

If, in the future, a much bigger company than the one who made the DX9 video card were going to come out with an *entirely* different and incompatible card that provided similar functionality but was completely different in terms of software... well, I wouldn't say hold off, but don't expect it to be the clear "paved way to the future".

Quote:
In the future, you might not be able to run software on a 32-bit-only processor at all.

Obsolete or not, at least the thing will be able to run 64-bit Linux or the upcoming 64-bit Windows effectively.

You're buying a chip now for the purpose of running software 4 years from now (which may or may not be compatible with your chip, considering it may be IA-64 software) at incredibly slow and unbearable speeds? Great, I'm so glad I bought a 386 instead of a 286 back in the day; now I can run WinXP on it and play Doom 3!
As for 64-bit versions of Linux and Windows... tell me, what is the benefit of those vs. my 32-bit Windows right now? By the time I need a 64-bit chip, I'll buy one.

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
September 18, 2003 6:20:11 AM

Quote:
TknD:
Quote:
If AMD pulls this off cleanly and correctly, they will force Intel into using AMD64.

No it won't. If anything, it will convince Intel to release their own flavour of x86-64 that probably won't be compatible with AMD64. However, since Intel has so much invested in IA64, I'd dare say that Intel is working feverishly on finding a good way to crossbreed it with IA32.

So you're saying Intel is going to wait on Microsoft, and all the other software companies that are already ready or have begun to support AMD64, to rewrite yet another version of Windows and other software, rather than implement AMD64 themselves?

The server market did not accept Itanium well; however, that is not the case with the Opteron. BIG engineering software is being ported to AMD64, and companies like the Opteron/AMD64 because it isn't as big a risk as Itanium, thanks to the 32-bit compatibility that exists NOW.

If Intel wanted an IA64/x86 crossbreed, they are long overdue, unless they already have one finished.
September 18, 2003 8:30:39 AM

If Intel doesn't think x86-64 is so hot, why are they bringing out a gamer's chip with 2 megs of cache?
September 18, 2003 9:06:44 AM

Quote:
If Intel doesn't think x86-64 is so hot, why are they bringing out a gamer's chip with 2 megs of cache?

To compete with the 1MB L2 cache chip with an on-die memory controller coming out soon?
People automatically equate x86-64 with the K8. x86-64 is only one *feature* of the K8. It's not the sole reason the K8 performs well (or a reason at all, actually, since we're running 32-bit software).

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
September 18, 2003 9:13:31 AM

Even if you still buy a P4 after the last line of Prescott CPUs, or maybe even Tejas... you will still have to upgrade to some sort of CPU, whether AMD or Intel, to be able to run anything 64-bit... it looks like Intel is "milking the cow".

---
If you go to work and your name is on the door, you're rich. If your name is on your desk, you're middle class. If your name is on your shirt, you're poor!
<P ID="edit"><FONT SIZE=-1><EM>Edited by pirox on 09/18/03 05:13 AM.</EM></FONT></P>
September 18, 2003 9:24:49 AM

The Prescott will have 1 meg of cache as well. Intel is trying to curry favour with gamers because they are very aware how valid 64-bit is for games. For most operations the Prescott will be more than a match for the Athlon 64, but the area where Intel will fall behind is gaming. This will become more drastic as games are set up for 64-bit processing.
September 18, 2003 11:54:13 AM

For the last time, 64-bit IS NOT WHAT AFFECTS GAMING.

It is the GPR extensions! That implies that Intel could theoretically add 8 more GPRs and get the very same help that UT2003 64-bit gives!

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you!</b></font color=blue></A>
September 18, 2003 11:58:41 AM

You are so right. After 3 years, I'll be a loser on the streets; I'll have lost all my business, having not switched now to 64-bit, where all the wise men will go.

Spud, help me! I'm a fanboy, I need therapy!

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you!</b></font color=blue></A>
September 18, 2003 1:55:56 PM

Can you start with Kinney? He's been awfully grumpy; I think some back passage action is just the thing he desires right now.

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you!</b></font color=blue></A>
September 18, 2003 4:12:13 PM

Quote:
If your software consists of only high-level code and the implementation were done in a way that accelerated the old-style optimizations, I *may* agree with you. However, Intel has never made a habit of doing this, and there are technical reasons why.

Way back years ago I personally handled the conversion from 16-bit x86 assembly to 32-bit in hundreds of files, and it *really* wasn't all that difficult. Tedious and time-consuming, yes. Difficult, no. So even *if* that was once again required (and how many people even work with low-level programming these days?), it certainly wouldn't be the end of the world. And that was only to recompile the code into 32-bit anyway. Anyone happy to leave their binaries just as they were (which was fine, since they still executed just peachy) had no worries at all. However, I highly doubt that such would even be necessary once again.

I'm not saying that Intel *will* do it that way, just that it's entirely possible to, and my personal belief is that *if* (when?) Intel does move SOHO to 64 bits, they'll do it in this way. And even if not, it's really no big deal. The absolute worst case would still be that old code will still execute, and new code with new 64-bit compilers will be pretty easy to port.

Quote:
The extended registers alone would mean that *every* x86-64 instruction would have to be duplicated (in terms of the internal micro-code recognition). There may not be much more added to the x86 code itself (as it requires only a prefix), but internally there would need to be a micro-op for IA-32, a micro-op for x86-64, *and* a micro-op for Intel's x86-64. That's a *lot* of work and far too costly.

I don't agree. If your mind is stuck in the patterns of decades past, yes. If however you actually live in the present, where both the Itanium and the Transmeta Crusoe are testaments to the feasibility of doing just this, then you'd see that it's really no problem at all. CPUs are a *lot* more complicated than they used to be, and x86 is more of a compatibility standard these days than an actual guideline for how CPUs work internally. Microcode and layer upon layer of cache and execution engines have completely changed the heart of the CPU. So what's one or two more instruction sets to convert to microcode? Nothing, really. And compared to the cache, it'd hardly be responsible for any of the die size either.

Quote:
Itaniums are in-order execution. And there's no technical reason why Itaniums couldn't be hyper-pipelined and clocked higher like the P4. In fact, due to the lack of x86 decoders and the even distribution of functions, it could potentially clock *higher* with a 20-stage pipeline than the P7 core.

I hate to ask, but do you even know anything about Itaniums? They're *very* much designed for out-of-order execution. That's the whole point of EPIC. That's the whole point of the Itanium. IA64 makes branch-prediction misses harmless by executing code down multiple branches before it even knows which branch it will need. It's taking parallelism to extremes to design an entirely out-of-order system. That's the very thing that scared the crap out of most of the software engineers who were going to be porting software to IA64. And that's the very reason why it won't clock higher easily and won't execute x86 efficiently. From the ground up it was designed not to, and it really doesn't need to clock higher. It's an execution monster as it is. Most people would call its methods wasteful. I just call it different. :)

Quote:
Way too expensive in terms of transistor count.

I doubt it. Transistors are mostly wasted on cache these days. Compared to that, it'd be nothing.

Quote:
This may be feasible, but a software emulation that offers 80-90% of native IA-64 speed (like FX!32 did for Alpha) while running x86 would be much more preferable.

Ugh. Could you pick a worse example? :( I've got a damn bloody expensive Alpha box here under my desk (not by my choice, mind you) that does absolutely nothing, because the supposed x86 emulation is *so* poor that it can't even run most software installers, not to mention the software itself. It runs Alpha binaries just fine, but it can't emulate x86 to save its life. (Literally, which is why it's just sitting there under my desk doing absolutely nothing.)

Bad examples aside, though, a software emulation layer wouldn't be so bad *if* it actually worked. Intel would most definitely be mocked, though, if that was their solution for bridging the gap between 32-bit and 64-bit.

Quote:
Again, the revision to the ISA may not be large, but in order to offer full x86-64 compatibility, your implementation would be huge in terms of the internal micro-code.

Agreed, but again it's still certainly a feasible solution, as the ever-expanding concept of micro-code is taking over the CPU world anyway. Just time and effort are needed. The die space wouldn't even be a concern.

Quote:
Transmeta would be very similar to what I think Intel would do: dynamic translation. Only they did it in firmware; I think Intel would rather go with software. However, both methods dynamically translate x86 instructions into a VLIW ISA.

Intel *could* go software, but I personally don't think that they would. Too much loss of face would be involved, even if it worked absolutely beautifully. And again, there's no reason not to just make it a part of the processor itself. But maybe this is a good point to say let's just agree to disagree? :)

Quote:
And IA-64 isn't a standard Intel is already working on? Like it or not, Intel plans on moving IA-64 down to the workstation and desktop, and x86-64 directly competes against this.

Down to workstation I can believe. (Aren't they already pushing this anyway?) Down to the SOHO desktop, though, I can't... at least not anytime this decade. In the meantime, a slightly worse-performing x86-64 solution to bridge the gap to SOHO 64-bit and run for even as much as the last five years of this decade would be perfectly understandable *and* not significantly detract from IA64 sales, if it was implemented purely to gain back market share taken by AMD and Apple with their hybrids.

Intel can toot the horn of IA64's gee-whiz factor all they like, but at the end of the day the sales just aren't spectacular anyway, and that's in the market that it was specifically designed for. Its chances of having meaning in the SOHO market are even more slim. Only a moron would cling to a barely-floating ship while ocean liners loaded with crates of money raced by. At some point Intel is bound to see the error of such lines of thought and cave in with a solution that caters to the desires of the masses.

Quote:
SSE is technically superior to 3DNow!

Not counting SSE2, I thought that it was the other way around...

Quote:
It's not a matter of "could be done" but a matter of "is it worth it". Is creating 3 different ISAs to compete in the market (and make a mess of software compatibility) worth it?

We may have the opportunity to find out. :o But would it really make any more of a mess of software compatibility than the mess that we already have?

Quote:
We're not talking about something as simple as SSE or non-SSE, as those still exist within the same mode (IA-32 protected mode). x86-64 exists in an entirely different mode of its own, and if you were to extend the registers again, you'd need another mode.

Maybe yes, maybe no. It depends entirely on how it is implemented. It's a world of opportunity to try something new. Maybe Intel will do just that. And then again, maybe they'll purposefully make it as incompatible as possible to pin AMD into as small a market as possible. Who knows what their plans are? I don't even think that anyone at Intel knows these answers yet. My personal opinion, though, is that Intel just may try something new and interesting.

Quote:
Software does *not* work across multiple modes with a simple modification of a few files. You'd need 3 different copies. Even if you could label them correctly, that'd be a hassle for consumers.

You would need three different compilations of the binaries, yes. Whoever said that they had to be labelled at all, though? It's entirely possible to put all three versions of the binaries into a single distribution with a hybrid front-end so that the whole experience is seamless to the end user. It's also possible to put all three versions of the binaries into a single installer that's smart enough to know which to install, once again making it transparent to the end user.

Yes, the *source code* would require processor-specific coding. Branches in the code like that are, however, already pretty commonplace for those who optimize their code and/or write compiled code for multiple platforms. (Both of which are slowly growing concepts.) The actual consumer would never even need to understand these technical differences. And Intel has the clout to force people into learning how to do this *and* the resources to design compilers that'll make a lot of it very easy (and possibly even seamless to the software engineer, so long as they don't delve into low-level coding).
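
A hedged sketch of that kind of processor-specific source branching (hypothetical code; the macros are the standard predefined ones GCC-style compilers set per target, so each of the three builds picks its own path at compile time):

```c
#include <stdio.h>

/* One source file, three compilations: the preprocessor selects the right
   path for whichever target this particular binary is built for, and the
   installer or front-end decides which binary the consumer actually runs. */
void describe_build(void) {
#if defined(__x86_64__)
    printf("64-bit x86-64 build: 16 GPRs, 64-bit pointers\n");
#elif defined(__SSE2__)
    printf("32-bit x86 build with SSE2 code paths\n");
#else
    printf("plain 32-bit x86 fallback build\n");
#endif
}
```

The end user never sees the branch; they just run whatever binary the installer picked.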

Quote:
Are you kidding me? Let's see, release a game 6-months earlier or release a game that runs 1.5x faster......
The market for games is obvious. He who gets the games out faster sells more.

Wow. Can you tell me how to get to your world? In the world that <i>I</i> live in software companies do that very rarely and only when they know a competitor is working on a similar concept to their software. Meanwhile the vast majority of the software titles progress at their own pace and even <i>delay</i> launches if need be.

Quote:
The fact that so few games out there use SSE even to the extent that they could should attest to that. There's high-level optimization, yes, but what game developer is going to spend an extra 6 months, hire assembly programmers, and have them optimize their huge code base for SIMD?

That has nothing to do with time. That has everything to do with finances. Most game development companies don't have the money to hire people to squeeze every last drop of performance out of the code. It comes down to a matter of priorities, weighed in the end by how much money they can spend on development and where that money is best spent.

Quote:
High-level optimizations through common Windows/DX calls, yes. With help from the manufacturers, even using some programming interfaces like Glide; however, *never* low-level optimizations on the graphics card itself. Driver updates later on bring better low-level optimizations.

Are you kidding? It's only in the last few years that software has aimed primarily at high-level optimizations over low-level optimizations. In the past it was a necessity. Fortunately graphics cards are aiming more at following DX standards now, at least, so the difference between generic OpenGL performance and a card-specific library isn't so drastic anymore. But then again, ATI and nVidia with their differences in shaders have forced a lot of lower-level branching again, because to them DX is not enough. Basically, low-level optimizations for specific graphics cards have been common in the game industry for as long as 3D graphics have been a part of that industry. The ideal behind the higher-order libraries like OGL and DX was <i>supposed</i> to negate that... but it never <i>actually</i> managed to.

Quote:
How many fully utilize SSE? How many today even fully utilize the DX8 engine in most DX8 cards? How many today even fully utilize the hardwired T&L engine in DX7 cards?

As many as have these as minimum requirements. Again, games <i>take so long to develop</i> that the underlying engine is usually not oriented towards the latest advancements. HL2 and D3 are the first massive engine rewrites in forever. There are still tons of games coming out that use modified Q3 engines. Just because the game itself is new doesn't mean that the game's engine is. And again, how much money is gained from 'fully' utilizing a feature compared to the cost of writing the code to 'fully' utilize it? This is why so many partial utilizations exist. They're quicker and easier and give almost-as-good results. That said, developers are still using these features in some manner. Just because that manner doesn't live up to your ideal doesn't mean they weren't used at all.

Quote:
Support for as many pieces of hardware, perhaps, utilization of that hardware? Hardly. We've only had a few engines out there that even remotely come close to having levels of optimization found in programs such as Lightwave or Photoshop. The Quake3 engine being one.

Id is definitely one of the best companies at optimizing. (Though EA is supposed to be pretty darn good too.) But even so, an awful lot of companies <i>will</i> put in code to utilize specific pieces of hardware better than the generic support they use to just passably run everything else.

The thing is, though: how many years has Photoshop been around? It's pretty easy to add and fine-tune optimizations in a product that's around for a long time. Games, however, come and go rather quickly, so there's hardly ever a reason to go back to a game engine's source code and optimize it further after the product has already been released. But how many versions of Lightwave and Photoshop have been released now? How many code maintenance lifecycles have those products seen?

Quote:
MMX frenzy? Do you remember MMX? There was so much hype, then programmers took one look at it, most stayed far away from it.

Man, I'd <i>really</i> like to know how to get to your world just to study the differences there. In my world there were an <i>awful</i> lot of games that openly advertised their usage of MMX. In fact, I had to upgrade from my Pentium 133 because it got to the point where games no longer supported any processor without MMX. If that's "most stayed far away from it" then I must have had one hell of a case of tunnel vision when choosing games.
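
(For the record, the kind of thing those MMX-advertising games were doing. A hedged sketch in C with MMX intrinsics, not anyone's actual shipping code: one saturating packed add brightens eight pixels per instruction, which is exactly the sort of inner loop a mid-90s software renderer lived in.)

<pre>
/* Hedged sketch: brighten an 8-bit greyscale buffer with MMX.
 * Compile with GCC and -mmmx; <mmintrin.h> provides the intrinsics. */
#include <mmintrin.h>

void brighten(unsigned char *pixels, int n, unsigned char amount)
{
    __m64 add = _mm_set1_pi8((char)amount);   /* amount in all 8 bytes */
    int i = 0;
    for (; i + 8 <= n; i += 8) {
        __m64 p = *(__m64 *)(pixels + i);     /* load 8 pixels at once */
        *(__m64 *)(pixels + i) = _mm_adds_pu8(p, add); /* saturating add */
    }
    _mm_empty();                  /* reset FPU state for later x87 code */
    for (; i < n; i++) {                      /* scalar leftovers */
        int v = pixels[i] + amount;
        pixels[i] = (v > 255) ? 255 : (unsigned char)v;
    }
}
</pre>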

Quote:
3DNow!-optimized games? The one or two that exist? There's Quake2 and......FF7? And 3DNow! was huge back when it first came out (technically speaking). It had great potential compared to the P2's x87 FPU. For such a technical feat, the adoption could be called nothing short of minimal.

Which is exactly my point. It wasn't targeted as well as it should have been because there wasn't enough AMD market share to give anyone a financial reason to optimize for it when they could spend the resources optimizing for something that would affect far more of their customers. It's <i>never</i> been about the superiority of the technology. It's <i>always</i> been about market share and using what resources you have to reach the most customers so that you have the largest possible source of income. Those who target certain enthusiasts specifically often end up with <i>only</i> those enthusiasts as a market. Support as many platforms as possible and specifically target the most commonly used of those platforms though, and <i>BAM</i>, money.

Quote:
As for "support" that is quite a different thing than "utilize". If I wrote a program write now, did absolutely no optimizations, downloaded ICC, used the -qax flag, my code "supports" SSE2. Is it properly optimized? not by a long shot. To do so, I'd have to delve into assembly and learn the intricacies of SIMD programming.

You're absolutely right. Take that one step further though. You could spend the next eighteen months learning all of the intricacies of SIMD programming and then make your utilization absolutely flawless and your code so optimized that you couldn't possibly make it any better ... <i>or</i> you could spend the next three months learning the basics of SIMD and writing code that's a lot better than it was without SIMD but definitely not perfect. <i>That's</i> utilization. Again, just because it isn't utilized to your liking doesn't mean that it isn't utilized in a beneficial way that, at the time, made the best use of the resources available to the engineers. It's not all or nothing. It's none or something.
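
To put that "three months of basics" in concrete terms, a hedged sketch (C with SSE intrinsics; nobody's actual game code): the two functions below do identical work, but the SSE version moves four floats per instruction. It's nowhere near a hand-tuned assembly kernel, and it's still a real win.

<pre>
/* Hedged sketch: basic-level SSE, the "something beats nothing" tier.
 * Compile with GCC and -msse; <xmmintrin.h> provides the intrinsics. */
#include <xmmintrin.h>

void add_scalar(float *dst, const float *a, const float *b, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = a[i] + b[i];                  /* one float at a time */
}

void add_sse(float *dst, const float *a, const float *b, int n)
{
    int i = 0;
    for (; i + 4 <= n; i += 4) {               /* four floats at a time */
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(dst + i, _mm_add_ps(va, vb));
    }
    for (; i < n; i++)                         /* scalar leftovers */
        dst[i] = a[i] + b[i];
}
</pre>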

Quote:
Photoshop 7 was written and released on Windows *first* and then *ported* to OSX months later. Looking at the Photoshop filter results from digitalvideoediting.com and Aceshardware, it shows that it's very well optimized for the Pentium 4 and SSE2.

I'll believe that any version of Photoshop was written completely from scratch for x86 the same day I believe Microsoft has fully disclosed the Windows API. No offense, but just because the P4 has been ramping in both speed and IPC at a surprising (but enjoyable) rate and is <i>finally</i> kicking some arse while the Mac all but stagnated for too long to even survive doesn't mean the code was actually written from the ground up for a P4 just because it runs well on a P4. If you have substantial proof of Adobe giving the PC preference over the Mac, I'd dearly love to see it. I readily admit that I don't research such things much anymore because, well, frankly, for years Adobe never showed any indication whatsoever that they'd ever change their attitude towards PCs.

Quote:
Hmmm, let's see, how is it optimized for the P4. Let's look at results of the K7 (no SSE2) vs the K8 (with SSE2). Oh wow! Look at that, more gain than in almost any other program. A software company that actually spent the time to hire assembly SIMD programmers to optimize their code for SIMD and......gained a large performance boost. Damn them for embracing new technologies unlike the rest of software out there that hasn't even heard of SSE2, damn those bastards that focus too much on the Pentium 4.

See, now you're just being a dick. I never once said that there was anything wrong with targeting a specific platform. In fact I said quite the opposite. I said that's <i>exactly</i> what software companies do: either target the optimizations that affect their majority market or target no specific optimizations at all. Just because you proved yourself wrong doesn't mean that you have to be a dick. You don't have to turn debating into childish arguing, you know.

Quote:
In many ways, I'd argue that the introduction of SIMD FP instructions was more of a revolution than 64-bit is. SIMD had the potential to increase performance by magnitudes in operations that are used today (as we saw in Quake 2, which was optimized for 3DNow! and is probably the only example there is). 64-bit's performance increase is limited to encryption and some scientific applications. Oh, and the ability to address more than 4 GB of memory. It's a shame 3DNow! didn't get as much credit as it deserved. SSE may be technically superior, but it was a year late.

I won't debate that any. I completely agree with you here. SIMD had (and frankly still has) a <i>lot</i> more potential for improving performance than 64-bit GPRs do. And it would have been great to see 3DNow! gain more acceptance. Too bad AMD just couldn't grab enough market share to make it worth the effort for most people to optimize for. And now that AMD supports SSE (and, with the K8, SSE2) there's no reason at all to support 3DNow!. That's just life. I mean, look at Itanium. Personally I think EPIC is a concept that really kicks arse, but because of market share it's been a real fight for Intel. I bet in time even AMD will take a good hunk out of that market share with the K8 simply because it <i>isn't</i> EPIC.

And yet we work on making quantum PCs a reality because they offer the same benefits. (But with theoretically even more potential.)

That's also one reason why Linux isn't picking up. Software development isn't being targeted at it because it still has such a low market share. Never mind that it has a much lower performance overhead than Windows and is more secure.

As I said, it's all about making the best use of limited resources through the most economic choices: targeting the largest market and picking your biggest low-cost improvements. It's not about who's actually better or about making software's performance the absolute best it could be. No market is driven by idealism.

Quote:
You're buying a chip now for the purpose of running software 4 years from now (which may or may not be compatible with your chip considering it may be IA-64 software) at incredibly slow and unbearable speeds? Great, I'm so glad I bought a 386 instead of a 286 back in the day, now I can run WinXP on it and play Doom3!

**ROFL** Damn straight! He he he. It <i>is</i> a pretty funny way of looking at it. It'd be like buying a GeForce FX 5200 for its DX9 support. Heh heh.

Quote:
As for 64-bit versions of Linux and Windows....tell me what the benefit of those are vs my 32-bit windows right now? By the time I need a 64-bit chip, I'll buy one.

I couldn't agree more. So many people are sold on the 64-bit hype and don't even understand what advantage (if any) they get from the 64-bitness. I won't even consider a 64-bit upgrade myself until I finally have necessary software that requires 64-bit and/or performs dramatically better in 64-bit than in 32-bit. And that is likely to be many years away.
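
For reference, here's the sort of inner loop that genuinely does gain from wider GPRs: big-number addition, the bread and butter of encryption code. A hedged sketch in C (not any real crypto library's code): with 64-bit limbs, a 1024-bit number takes 16 loop iterations instead of the 32 it takes with 32-bit limbs.

<pre>
/* Hedged sketch: multi-precision addition using 64-bit limbs.
 * On a 64-bit CPU each limb is one native register; on a 32-bit CPU
 * the compiler must synthesize every 64-bit op from two 32-bit ones. */
#include <stdint.h>

/* r = a + b, all stored as little-endian arrays of 64-bit limbs;
 * returns the final carry out. */
uint64_t bignum_add(uint64_t *r, const uint64_t *a,
                    const uint64_t *b, int limbs)
{
    uint64_t carry = 0;
    for (int i = 0; i < limbs; i++) {
        uint64_t s = a[i] + carry;
        carry = (s < carry);          /* carry out of a[i] + carry */
        r[i] = s + b[i];
        carry += (r[i] < s);          /* carry out of s + b[i] */
    }
    return carry;
}
</pre>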

<pre><A HREF="http://ars.userfriendly.org/cartoons/?id=20030905" target="_new"><font color=black>People don't understand how hard being a dark god can be. - Hastur</font color=black></A></pre><p>
September 18, 2003 4:29:57 PM

Quote:
So you're saying Intel is going to wait on Microsoft, and all the other software companies that are already ready or have begun to support AMD64 to rewrite yet another version of windows, and other software rather than implement AMD64 themselves?

Damn straight. I'll yell it from the mountaintops. I'll spam innocent AOLers about it. I'll stand on my head and sing it to the tune of 'Bullet With Butterfly Wings'. Whatever. But most definitely, that's what I'm saying.

If there is one thing Intel is known for, it is pushing its own standards above anyone else's, no matter how much better those other standards may be or how much longer they've been in use. <i>That</i> is Intel. They drive the industry to exactly where they want it to be, or spin their wheels trying. If you're lucky, at absolute best Intel will provide AMD64 compatibility while pushing their own 'superior' x86-64 and/or IA64 flavour.

Quote:
The server market did not accept Itanium well, however that is not the case with Opteron. BIG engineering software is being ported to AMD64 and companies like the opteron/AMD64 since it isn't as big a risk as Itanium due to 32bit compatibility that exists NOW.

Name one release of Itanium that doesn't run IA32 code.

Quote:
If intel wanted an IA64/x86 cross breed, they are long overdue unless they already have one finished.

Overdue, or is it just that they don't want the market to go in that direction yet? I say the latter. There's absolutely no need for 64-bit integers to be the baseline yet, and Intel knows it. They probably <i>do</i> have their own 64-bit SOHO plans, but they're not about to implement them until there's an actual reason to. An unacceptable loss of market share due to the SOHO market's misperceptions will probably end up being that reason long before any actual need for home 64-bit processing does.

<pre><A HREF="http://ars.userfriendly.org/cartoons/?id=20030905" target="_new"><font color=black>People don't understand how hard being a dark god can be. - Hastur</font color=black></A></pre><p>
September 18, 2003 6:04:19 PM

Quote:
32-bit computing will probably dominate for the next 4 years

Why do you think so? 4 years is too much time. Don't you think developers (especially game developers) will utilize the sheer performance advantage they'll get from a 64-bit platform? After all, an optimized 64-bit platform really is faster than a 32-bit platform.
September 18, 2003 7:49:14 PM

Quote:
Why do you think so? 4 years is too much time. Don't you think developers (especially game developers) will utilize the sheer performance advantage they'll get from a 64-bit platform? After all, an optimized 64-bit platform really is faster than a 32-bit platform.


Depends on which 64-bit platform and which 32-bit platform. Again, the majority of code has no use for 64-bit. It simply isn't running into the integer barrier or the memory barrier.
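
(And for what it's worth, code that does need the occasional 64-bit value already has it. 64-bit integer types work fine on 32-bit CPUs; the compiler just emits two instructions per operation instead of one. A quick hedged sketch in C:)

<pre>
/* Hedged sketch: 64-bit arithmetic on any CPU, 32-bit included.
 * The compiler synthesizes it (e.g. add + add-with-carry on x86). */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int32_t  smax = INT32_MAX;            /* 2,147,483,647 */
    uint32_t umax = UINT32_MAX;           /* 4,294,967,295 */
    uint64_t big  = (uint64_t)umax + 1;   /* past the 32-bit barrier */

    printf("signed 32-bit max:   %d\n", smax);
    printf("unsigned 32-bit max: %u\n", umax);
    printf("needs 64 bits:       %llu\n", (unsigned long long)big);
    return 0;
}
</pre>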

"We are Microsoft, resistance is futile." - Bill Gates, 2015.
September 19, 2003 1:26:29 AM

Any game that asks for over ~4.29 billion points (the unsigned 32-bit cap) in score must be truly dumb not to use smaller scoring ratios. Instead of starting with 1000 points per hit, start with 10, for Christ's sake; nothing is really going to change.

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you!</b></font color=blue></A>
September 19, 2003 6:51:51 AM

Well, to the people who think 32-bit is just fine: OK, stay in your 32-bit world. But here is a fact: the faster we get into 64-bit, the faster we will have 64-bit apps. So that dumb Intel Chief Technology Officer needs to grow up and let tech move on. I don't want Intel waiting 10 more years for 64-bit and saying we don't need it right now. Agree with Intel as much as you want, Kinney, but face the facts: I want CPUs going as fast as they can, and 64-bit will help. And AMD forcing Intel into 64-bit will help both Intel and AMD. Games will look better and software will run faster.
September 19, 2003 1:25:41 PM

AtolSammeek, what <i>exactly</i> do <i>you</i> need from 64-bit software that 32-bit software <i>cannot</i> provide?

<pre><A HREF="http://ars.userfriendly.org/cartoons/?id=20030905" target="_new"><font color=black>People don't understand how hard being a dark god can be. - Hastur</font color=black></A></pre><p>
September 19, 2003 1:37:31 PM

he no doubt hopes it will enlarge his e-penis. :smile:

---
<font color=red>The preceding text is assembled from information stored in an unreliable organic storage medium. As such it may be inaccurate, incomplete, or completely wrong</font color=red> :wink:
September 19, 2003 3:52:37 PM

Quote:
How many fully utilize SSE? How many today even fully utilize the DX8 engine in most DX8 cards? How many today even fully utilize the hardwired T&L engine in DX7 cards?


Ask yourself this: how many consumers out there own video cards that fully support DirectX 9? Hell, I know a few Compaq owners who are still running TNT2 cards. Why would you, as a game developer, fully support a standard that most users couldn't possibly take advantage of? The majority of people are still running DirectX 7 or 8 cards. That is why the engines are not fully supported: they have to be backwards compatible with all the old crap that's still out there. Writing software to fully support each version of DX is obviously time-consuming and therefore costs money, something gaming companies don't always have a lot of.

<font color=red> If you design software that is fool-proof, only a fool will want to use it. </font color=red>
September 19, 2003 6:53:01 PM

Now I know why Genetic Weapon said you are "the dumbest person on this forum".
You and Eden should go read Ars Technica and play with each other's buttholes.

Athlon 1700+, Epox 8RDA (NForce2), Maxtor Diamondmax Plus 9 80GB 8MB cache, 2x256mb Crucial PC2100 in Dual DDR, Radeon 9800NP, Audigy, Z560s, MX500