Where's P4Man?

Last response: in CPUs
July 18, 2004 9:43:07 PM

I really want to see his face when he reads that DOOM 3 has nothing 64-bit in it and was never intended to be built around 64-bit, specifically x86-64.

Now all that's left is to see whether Valve will actually build anything around it. Considering I've heard nothing about x86-64 builds from Epic or Ubisoft, I doubt very much we'll see it.

Xeon

<font color=red>Post created with being a dickhead in mind.</font color=red>
<font color=white>For all emotional and slanderous statements contact THG for all law suits.</font color=white>


July 18, 2004 11:46:09 PM

Farcry on the other hand...
July 19, 2004 12:46:14 AM

Fact is, game developers don't really need 64-bit processors, as the SSE/SSE2/SSE3 extensions already give 64-bit functionality.


===========================
<A HREF="http://www.clan-chaos.com" target="_new">clan CHAOS</A>
July 19, 2004 3:03:01 AM

Hence I will always dismiss the validity of x86-64; it is a joke and always will be a joke. Call it what it is: 16 64-bit general-purpose registers.

Xeon

<font color=red>Post created with being a dickhead in mind.</font color=red>
<font color=white>For all emotional and slanderous statements contact THG for all law suits.</font color=white>
July 19, 2004 3:15:25 PM

muahahahaha... what a funny and misleading statement!!!

Here you have some "jokes" O_o

SuSE Linux 9.1 AMD64: x86-64 vs. x86-32
--------------------------------------

Apps (compiled for 64-bit):

-LAME 3.96: 64-bit version 31% faster than the 32-bit version.

http://www.anandtech.com/linux/showdoc.aspx?i=2127&p=2

-POV-Ray 3.50: 64-bit version 28% faster than the 32-bit version.

http://www.anandtech.com/linux/showdoc.aspx?i=2127&p=3

-MySQL 4.0.20d: 64-bit version 25% and 11% faster than the 32-bit version.

http://www.anandtech.com/linux/showdoc.aspx?i=2127&p=5


Games:

<b>-Epic Unreal Tournament 2004, 64-bit version</b>: 29% faster than the 32-bit version O_o

http://www.anandtech.com/linux/showdoc.aspx?i=2127&p=4

And

-Wolfenstein Enemy Territory: the 32-bit binary on a 64-bit OS is 12% faster than on a 32-bit OS.

http://www.anandtech.com/linux/showdoc.aspx?i=2127&p=4



yeah a "joke" hahahaha...
July 19, 2004 3:16:58 PM

True, I hadn't thought of that... Doom 3 has no extras whatsoever for people who own x86-64 systems. This is of singular importance right now, as Doom 3 technology will be the driving force for the next few years as far as technology goes - it is the single most demanding application that everyone wants to use. All the others have much more limited appeal!

<i>Edit: As for the previous post: I'm far, far more interested in Doom 3 than any of your other benchmarks, sorry. And I think I'm not alone.</i>

<i><font color=red>You never change the existing reality by fighting it. Instead, create a new model that makes the old one obsolete</font color=red> - Buckminster Fuller </i><P ID="edit"><FONT SIZE=-1><EM>Edited by Mephistopheles on 07/19/04 02:19 PM.</EM></FONT></P>
July 19, 2004 3:22:19 PM

OK, stupid question:

<b>Why</b> should 64-bits be faster than 32-bit?

<i><font color=red>You never change the existing reality by fighting it. Instead, create a new model that makes the old one obsolete</font color=red> - Buckminster Fuller </i>
July 19, 2004 3:24:38 PM

I don't know. But it is.
July 19, 2004 3:27:23 PM

Quote:
Why should 64-bits be faster than 32-bit?

Because 64 is bigger than 32, of course :wink:

---
Epox 8RDA+ V1.1 w/ Custom NB HS
XP1700+ @200x10 (~2Ghz), 1.4 Vcore
2x256Mb Corsair PC3200LL 2-2-2-4
Sapphire 9800Pro @412/740
July 19, 2004 4:00:14 PM

Some data manipulations are faster in 64-bit mode, simple!

If you want to process a big chunk of data and have the choice of messing with it in a roomier place, you have a good chance of doing it faster! But on the other hand, some processing might take longer; for example, applying an AND/OR "filter" to two 32-bit binary values might be slower in 64-bit mode than in 32-bit mode... But we can reverse it: doing AND/OR on a 64-bit-long chunk of data requires 2 operations in 32-bit mode and only 1 in 64-bit mode.

It's like asking why having more RAM impacts your system performance. Theoretically, calculating/interpreting machine code has nothing to do with RAM; it's only moving and comparing BITS in registers...

Well... I stop here. I'm not a microprocessor architect, so I don't want to be flamed for saying too much weird stuff!! :smile:
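A minimal sketch of that 64-bit-chunk point in C (my own illustration, not from any game code; the function names are made up):

```c
#include <stdint.h>
#include <stddef.h>

/* Apply an AND "filter" one 32-bit word at a time: one operation
   per 4 bytes of data. */
static void mask32(uint32_t *buf, size_t words, uint32_t mask) {
    for (size_t i = 0; i < words; i++)
        buf[i] &= mask;
}

/* The same work done 64 bits at a time: one operation per 8 bytes,
   i.e. half as many ANDs for the same buffer on a 64-bit CPU. */
static void mask64(uint64_t *buf, size_t words, uint64_t mask) {
    for (size_t i = 0; i < words; i++)
        buf[i] &= mask;
}
```

Same result either way; the 64-bit loop just retires half the instructions.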

--
It's tricky to use words like <b><font color=green>AMD</font color=green></b> or <b><font color=blue>Intel</font color=blue></b> in a signature some users could think your are biased.
July 19, 2004 4:58:02 PM

Hm, this is still not a thorough reason for 64-bit to be faster than 32-bit-compiled code. Even more so if it's exactly the same - take POV-Ray, for instance. How come rendering exactly the same frame in 64-bit mode is faster than in 32-bit mode?...

<i><font color=red>You never change the existing reality by fighting it. Instead, create a new model that makes the old one obsolete</font color=red> - Buckminster Fuller </i>
July 19, 2004 5:20:17 PM

It's like I just said... Doing some stuff in 32-bit mode requires X operations due to the 32-bit limit. In 64-bit mode these operations require fewer "passes", so they will be done faster.

It's like having an 8-digit calculator to do a 16-digit calculation: to do so, you have to use memory and do more "passes" before getting a result. If you had a 16-digit calculator, you would have done the calculation in 1 pass!

Am I clear?
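A minimal C sketch of that "extra pass" idea (my own illustration; the helper name is made up): emulating one 64-bit addition with only 32-bit operations takes two adds plus carry handling, where a 64-bit ALU does it in a single instruction.

```c
#include <stdint.h>

/* Emulate a 64-bit addition using only 32-bit operations. */
static uint64_t add64_via_32(uint32_t a_hi, uint32_t a_lo,
                             uint32_t b_hi, uint32_t b_lo) {
    uint32_t lo    = a_lo + b_lo;            /* first pass: low halves   */
    uint32_t carry = (lo < a_lo) ? 1 : 0;    /* unsigned wrap => carry   */
    uint32_t hi    = a_hi + b_hi + carry;    /* second pass: high halves */
    return ((uint64_t)hi << 32) | lo;
}
```

Two passes instead of one, exactly like the 8-digit calculator doing 16-digit work.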

--
It's tricky to use words like <b><font color=green>AMD</font color=green></b> or <b><font color=blue>Intel</font color=blue></b> in a signature some users could think your are biased.
July 19, 2004 5:24:19 PM

What I think Meph is asking particularly, is how would 64 bit speed up purely 32-bit operations...

To use your calculator analogy, how would your 16-digit calculator help speed things with a calculation that still used only 8 digits?

primarily I think he was wondering about EugeneMc's Statement:
Quote:
-Wolfenstein Enemy Territory: the 32-bit binary on a 64-bit OS is 12% faster than on a 32-bit OS.

which implies that 64-bit is helping standard 32-bit code, whereas it simply can't be :eek:  .

<pre>[Edit]Besides... I think Meph is actually quite clued up anyway...</pre><p>
---
Epox 8RDA+ V1.1 w/ Custom NB HS
XP1700+ @200x10 (~2Ghz), 1.4 Vcore
2x256Mb Corsair PC3200LL 2-2-2-4
Sapphire 9800Pro @412/740<P ID="edit"><FONT SIZE=-1><EM>Edited by ChipDeath on 07/19/04 06:25 PM.</EM></FONT></P>
July 19, 2004 6:55:52 PM

Quote:
which implies that 64-bit is helping standard 32-bit code, whereas it simply can't be.

OK, here is another clear picture! I wish!

GAME CODE -> OS/Drivers -> Hardware

If the OS/drivers have 64-bit optimisations and support, 32-bit software running on top of them can benefit. Some calls to hardware or the OS made by the GAME can be "translated" or executed with 64-bit instructions without problems!

This explains a 32-bit-on-64-bit performance increase!

--
It's tricky to use words like <b><font color=green>AMD</font color=green></b> or <b><font color=blue>Intel</font color=blue></b> in a signature some users could think your are biased.
July 19, 2004 7:27:32 PM

Quote:
To use your calculator analogy, how would your 16-digit calculator help speed things with a calculation that still used only 8 digits?

You are right, the 16-digit calculator will not speed anything up in this case. But put it in perspective: if 50% of your calculations need only 8 digits but the other 50% need 16 digits, you will gain performance overall, because the calculations on the 16-digit values will be done faster. This is how you can improve performance with 64-bit...

A little math:
you want to do 22 x 45 with a 2-digit calculator; you can't do it directly...

You need to decompose the equation:
2 x 5 = 10
20 x 5 = 100 (2 x 5, shifted by 1)
2 x 40 = 80
20 x 40 = 800 (2 x 40, shifted by 1)

10 + 100 + 80 + 800 = 990 (this can't be done directly on the 2-digit calculator, so you need even more math transformations to get your result)

With a 4-digit calculator:
22 x 45 = 990

I know my explanation is not scientifically rigorous, but it explains how 64-bit computing can speed up certain types of calculation quite a lot. And the steps from 8 to 16 and from 16 to 32 bits worked the same way!
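The same decomposition works in C (my own illustration; the function name is made up): building a 32x32 multiply out of 16-bit partial products, just as the 22 x 45 example builds a 4-digit answer out of 2-digit steps. A multiplier twice as wide collapses all of this into one operation.

```c
#include <stdint.h>

/* Schoolbook multiply of two 32-bit numbers using only 16-bit halves,
   mirroring the 22 x 45 decomposition above. */
static uint64_t mul32_via_16(uint32_t a, uint32_t b) {
    uint32_t a_lo = a & 0xFFFF, a_hi = a >> 16;
    uint32_t b_lo = b & 0xFFFF, b_hi = b >> 16;

    uint64_t p0 = (uint64_t)a_lo * b_lo;        /*  2 x 5          */
    uint64_t p1 = (uint64_t)a_lo * b_hi << 16;  /*  2 x 40 shifted */
    uint64_t p2 = (uint64_t)a_hi * b_lo << 16;  /* 20 x 5  shifted */
    uint64_t p3 = (uint64_t)a_hi * b_hi << 32;  /* 20 x 40 shifted */

    return p0 + p1 + p2 + p3;                   /* sum the partials */
}
```

Four partial products and a sum, versus a single wide multiply instruction.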

--
It's tricky to use words like <b><font color=green>AMD</font color=green></b> or <b><font color=blue>Intel</font color=blue></b> in a signature some users could think your are biased.
July 19, 2004 10:57:43 PM

Quote:
I know my explanation is not scientifically rigorous, but it explains how 64-bit computing can speed up certain types of calculation quite a lot. And the steps from 8 to 16 and from 16 to 32 bits worked the same way!

Use SSE2; it's able to chew on 64-bit variables, plus it's vectorized.
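For illustration, a minimal sketch of that vectorized 64-bit capability using SSE2 intrinsics (my own example, not Xeon's code; assumes a compiler that provides `<emmintrin.h>`, and the helper name is made up):

```c
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stdint.h>

/* Add two pairs of 64-bit integers with a single SSE2 PADDQ:
   both 64-bit lanes are added at once. */
static void add_two_u64_pairs(const uint64_t a[2], const uint64_t b[2],
                              uint64_t out[2]) {
    __m128i va = _mm_loadu_si128((const __m128i *)a);
    __m128i vb = _mm_loadu_si128((const __m128i *)b);
    _mm_storeu_si128((__m128i *)out, _mm_add_epi64(va, vb));
}
```

So even a "32-bit" P4 or Athlon XP with SSE2 can do 64-bit integer adds, two at a time.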

Xeon

<font color=red>Post created with being a dickhead in mind.</font color=red>
<font color=white>For all emotional and slanderous statements contact THG for all law suits.</font color=white>
July 20, 2004 12:36:54 AM

Quote:
Why should 64-bits be faster than 32-bit?

Hmmmm.

Speaking from the perspective of someone who's done some 3D engine coding...

The rendering engine almost never actually needs a 64-bit integer quantity. 3D rendering engines have gotten by without that for a long time, and it's a bit difficult to think of a way 64-bit integers can be put to use.

Mind you, there was an old trick called fixed-point math that got used in the 386/486 days, when the x87 FPU was still slowish and rather unavailable. Basically 32-bit integers got taken and treated as floating-point numbers, except that the characteristic was assumed to be 16, the sign bit was thrown away, and the entire 32 bits was devoted to the mantissa in two's-complement notation. It also meant a lot of the comparatively complex FPU ops could be replicated with simple integer ops and bit-shifting. Theoretically it could be extended to 64 bits for greater range and precision.

Fixed-point math was a bit of a hack though. I don't remember ANY compiler that supported it directly (you had to write your own assembler routines for it), and it became an obsolete hack once the Pentium went mainstream with its built-in, parallel-pipelined FPU.
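For illustration, here is a minimal 16.16 fixed-point sketch in C (my own reconstruction of the general technique, not the exact layout described above and not anyone's actual engine code; all names are made up). The binary point sits 16 bits up, so multiply and divide need a shift to re-normalize:

```c
#include <stdint.h>

/* 16.16 fixed-point: 16 integer bits, 16 fractional bits. */
typedef int32_t fix16;

#define FIX_ONE (1 << 16)

static fix16 fix_from_int(int x) { return (fix16)(x << 16); }
static int   fix_to_int(fix16 x) { return x >> 16; }

/* Multiply: widen to 64 bits so the product fits, then shift the
   binary point back down. */
static fix16 fix_mul(fix16 a, fix16 b) {
    return (fix16)(((int64_t)a * b) >> 16);
}

/* Divide: pre-shift the dividend up to keep fractional precision. */
static fix16 fix_div(fix16 a, fix16 b) {
    return (fix16)(((int64_t)a << 16) / b);
}
```

All of it is plain integer ops and shifts, which is exactly why it beat the x87 on a 386/486.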

I suppose artificial intelligence might get a boost from 64-bit. I'm not qualified to say, though, as I haven't taken any AI courses yet.

Obviously games don't need more than 2GB address space either. Game design engines might (Epic says we're reaching that point now), but not the game itself.

Everything else...I suppose is down to the extra GPRs. x86 has been register-starved for ages. MMX/SSE/SSE2 kind of helps, but it would have helped <i>more</i> if the new registers were more tightly coupled with the standard GPRs. Shuttling data between three or four different sets of registers is kind of a drag.

<i>"Intel's ICH6R SouthBridge, now featuring RAID -1"

"RAID-minus-one?"

"Yeah. You have two hard drives, neither of which can actually boot."</i>
July 20, 2004 6:47:44 AM

A simple analogy might help. You have a floating stock of 28 cubic units, and a choice: a 32-cubic-unit structure or a 64-cubic-unit structure. Which is better?
With the 64-unit structure, you can have more workers accessing the stock (the extra GPR, FP and SSE2 registers). Once everything is set up, the spacing between items allows easier access, with fewer mistakes.
With the 32-unit structure, you would always have stuff waiting to be stacked, because if you pack things too tightly, you can't find them. You have items that take up more space than they need, and you still have the big problem of sometimes grabbing the wrong piece.
July 20, 2004 9:26:16 AM

I fully understand it helps with 64-bit maths. It's just the claim that <i>purely</i> 32-bit stuff will run faster on a 64-bit enabled OS that I find difficult to believe. Either <i>something</i> - either in the game or drivers - is taking advantage of native 64-bit, or the OS is simply better optimized somehow. The former would mean you can't say it's an example of 32-bit being faster "because of 64 bit", and the latter would be optimizations that are possible on a native 32-bit system anyway.

---
Epox 8RDA+ V1.1 w/ Custom NB HS
XP1700+ @200x10 (~2Ghz), 1.4 Vcore
2x256Mb Corsair PC3200LL 2-2-2-4
Sapphire 9800Pro @412/740
July 20, 2004 11:21:07 AM

I've been abroad for a couple of weeks; nice to see you missed me.

>I really want to see his face when he reads that DOOM3 has
>nothing 64bit or ever intended to build around 64bit
>specifically x86-64

So Doom3 is launched? Finally; it was only due 2 or 3 years ago. I'll look up some reviews soon, but suffice it to say I'm not surprised *at all*. No one ever claimed id would make a 64-bit port. What is the big deal about that? Doesn't it run on 64-bit Windows or Linux?

>Now all thats left is to see if Valve will actually build
>anything around it.

Not for HL2 AFAIK. Unreal, Far Cry and AA otoh..
Anyway, got lots of reading to do first.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
July 20, 2004 11:49:36 AM

where is the doom3 demo? hope it comes out before the retail game.

Prescott 3.0E 1MB L2 HT
1GB PC 3200 Dual channel(PAT)
Asus P4P800 Bios 1016
PNY Geforce 6800 GT 256MB DDR3
56,064 Aquamarks
July 20, 2004 12:18:13 PM

Quote:
Use SSE2 it's able to chew at 64bit variables plus it's vectorized.

I know that SSE2 can do 64-bit, but AMD64 offers "native" 64-bit computing power. I'm not sure that an SSE2 transformation on 64-bit variables is faster than AMD64 computation on 64-bit. SSE2 might only simplify programming, while in the internal CPU structure it could "decompose" the transformation to make it fit in the CPU registers.

--
It's tricky to use words like <b><font color=green>AMD</font color=green></b> or <b><font color=blue>Intel</font color=blue></b> in a signature some users could think your are biased.
July 20, 2004 12:22:07 PM

>I know that SSE2 can do 64bit, but the AMD64 offer a
>"native" 64bit computing power

AFAIK, in AMD64 "long mode" (ie, 64bit mode), you get twice the number of SSE2 registers. This should help SSE2 enabled apps quite a bit.

Quick question: have there been any 64 benchmarks on intel Xeons ? Like I said, I've been away for a while.. Xeon 64 is launched, or not ?

= The views stated herein are my personal views, and not necessarily the views of my wife. =
July 20, 2004 12:24:52 PM

Quote:
It's just the claim that purely 32-bit stuff will run faster on a 64-bit enabled OS that I find difficult to believe.

I don't find it hard to believe. First, Windows XP 64-bit is NOT a pure 64-bit OS. It's still based on 32-bit. I'm sure Microsoft did not recompile/recode everything. They probably focused on tasks where improvements could be made (memory management, hardware-to-OS "handshaking", etc.).

This is why 32-bit stuff benefits from a 64-bit OS: the 32-bit apps still run at their full potential in 32-bit (no emulation or translation needed), and the OS/driver layers get a boost from 64-bit optimisation.

--
It's tricky to use words like <b><font color=green>AMD</font color=green></b> or <b><font color=blue>Intel</font color=blue></b> in a signature some users could think your are biased.
July 20, 2004 12:27:09 PM

Good to see you again P4Man!

No, the Xeon 64 is not out yet as far as I know; we've heard a bit about it, but we're still waiting to see it!

Do you know that LGA775 is out and not that impressive? :smile:

--
It's tricky to use words like <b><font color=green>AMD</font color=green></b> or <b><font color=blue>Intel</font color=blue></b> in a signature some users could think your are biased.
July 20, 2004 12:35:52 PM

You're quite wrong here, on a number of accounts..

>first Windows XP 64bit is NOT a pure 64bit OS

It is afaik. It is based on NT for Itanium, which doesn't know any '32 bit' at all. It's as 64-bit as an OS can be.

> I'm sure Microsoft did not recompile/recode everything

I'm sure they did (at least recompile everything, and recode the kernel), because an A64/Opteron would have a hard time executing IPF instructions :)  Now it *could* be some parts are still 32 bit based -perhaps only in beta- for those pieces of code that are missing in Windows for Itanium, but even there I doubt it.

> They probably focused on tasks where improvements could be
>done (memory management, hardware to OS "handshacking",
>etc..)

Those are the parts they had to rewrite; the rest has got to be recompiled at least.

>This is why 32bit stuff benefits from 64bit OS, the 32bit
>apps still runs at their full potential in 32bit (no
>emulation or translation needed)

Not true. In fact, 32 bit apps *do* need an emulation layer (64 bit WoW) to execute. The cpu doesn't require any emulation, but windows does if you want it to execute 32 bit code and APIs. I'm not sure how Linux or other OS's handle this, but I assume it's the same. I remember 64 bit SuSE (or another distro, don't recall) not supporting 32 bit apps yet.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
July 20, 2004 12:38:45 PM

>Do you know that LGA775 is out and not that impressive?

Yeah I knew that.. I've known it for a year at least :) 

= The views stated herein are my personal views, and not necessarily the views of my wife. =
July 20, 2004 1:09:49 PM

First, I must say that I love you!!! I mean, you are the kind of person that makes me happy! I learn a lot of stuff from you!

I'm a lot more on the hardware side of the PC world. I can do a lot of stuff with hardware; I studied electronics and telecom, so I understand every aspect of hardware architecture. But when it comes to software and micro-architecture, I'm more confused and I do a lot of guesswork! You are there to light my path to a better understanding of computers.

My software knowledge is limited to Visual Basic (and only the "basic") and PHP/MySQL, which I really like for building dynamic web pages. Most of my PHP/MySQL work is done for my job's intranet, so I can't show you how good I am!!! :smile:

Back to 64 bit...

Quote:
It is afaik. It is based on NT for Itanium, which doesnt know any '32 bit' at all. Its as 64 bit as an OS can be.

Humm... But the architecture of the Itanium is quite different from the classic x86 architecture. I mean, AMD64 is to a great extent based on the Athlon architecture. From your explanation, I understand that Microsoft used a big chunk of the Windows for Itanium code, but they compiled it for AMD64???

Quote:
Not true. In fact, 32 bit apps *do* need an emulation layer (64 bit WoW) to exectute. The cpu doesnt require any emulation, but windows does if you want it to execute 32 bit code and API's.

So, this would explain why 32bit apps are very on Windows 64 for AMD64, because they only need a "little" software emulation. This same software emulation cripples any 32-bit app on Itanium systems.

Quote:
I remember 64 bit SuSE (or another distro, don't recall) not supporting 32 bit apps yet.

But at least Linux comes with its own compiler, which makes it a lot easier to port apps to native 64-bit Linux code. I doubt that Microsoft will ever give out their source and/or compiler for this purpose!

Have a good day! And continue to teach me! I am information-hungry!

--
It's tricky to use words like <b><font color=green>AMD</font color=green></b> or <b><font color=blue>Intel</font color=blue></b> in a signature some users could think your are biased.
July 20, 2004 1:37:37 PM

Quote:
Yeah I knew that.. I've known it for a year at least :) 

Damn it! I can't Scoop you!!!

Maybe for some Sempron infos :
<A HREF="http://www.theinquirer.net/?article=17314" target="_new">http://www.theinquirer.net/?article=17314</A>

I like the idea of Sempron vs. Celeron D ratings, but this might confuse potential Athlon 64 buyers... Sempron 3100+ vs. Athlon 64 2800+: which one is faster? But they will play Intel's game. Many customers thought a Celeron 2.6GHz was faster than a P4 2.4GHz.

This little article exposes some potential "problems" for AMD:
<A HREF="http://www.overclockers.com/articles1068/" target="_new">http://www.overclockers.com/articles1068/</A>

How can AMD ramp up AMD64 sales? With the Sempron targeting the low-end market at a low price, AMD might have a hard time selling AMD64, since S754 will be only low/mid-end and S939 doesn't have anything at "mid-cost".

It gives a potential buyer like me a hard time! If I go S754 today, I will be limited to the Athlon 64 3700+ (which is not a problem for at least 1 year), but I don't get PCI Express support??? And if I want to go the S939 route, I will spend big money and I will not have PCI Express support either! This bugs me a lot!

On the other side, I could go Intel, but I must pay a premium for PCI Express and DDR2. And LGA775 processors are not yet 64-bit ready! Damn it!

So, here are my options:

1. Keep my old Athlon XP platform and buy a decent GPU. A Radeon 9600/9800 Pro would be nice, and it should be able to give me at least 1 year of good gaming experience.

2. Get a cheap S754/Athlon 64 2800+ based system with a Radeon 9600/9800 Pro. Then change my MB/GPU in about 1 year to get a good PCI Express based GPU at a decent price. But I would be stuck with S754.

3. Get a S939/Athlon 64 3500+ based system with a Radeon 9600/9800 Pro, which will cost me much more, and upgrade my MB/GPU in a year to get a good PCI Express based GPU at a decent price. I would then still have a decent CPU, since AMD/Intel seem to be "stuck" in their CPU speed ramping.

4. Get a pricey LGA775/P4E 2.8GHz based system and change to a 64-bit CPU in 1 year or so. But this would cost me a lot for no bonus at all. I would pay only for new tech that doesn't give me much for now.

The other factor that bugs me is when AMD will switch to DDR2 or DDR3 support. In 1 year or 2? I think they might skip DDR2, because DDR3 will probably be available by the time they "upgrade" their platform.

AMD should have done only 2 sockets: S939 and S940. S940 for Opterons (servers) and S939 for the mass market, and they should have made 1- and 2-memory-channel CPUs for S939. This is possible, since a single-channel memory controller would use fewer pins. We would have only 1 motherboard type, with memory slots marked for single- and dual-channel use.

I stop here!

--
It's tricky to use words like <b><font color=green>AMD</font color=green></b> or <b><font color=blue>Intel</font color=blue></b> in a signature some users could think your are biased.
July 20, 2004 1:53:27 PM

>Humm... But, the architecture of the Itanium is quite
>different of classic x86 architecture. I mean the AMD64 is
>to great extent based on the Athlon architecture. From your
>explanation, I understand that Microsoft used a big chunk
>of the Windows for Itanium code, but they compiled it for
>AMD64???

That's what I recall. NT (and more recent versions) is mostly written in C. All you need is a compiler :)  Well, that's a bit exaggerated of course, but you get the idea. Windows is no less portable than, say, Linux. Obviously, the kernel needs more than just a recompile, whether that is a Linux or a Windows kernel.

>So, this would explain why 32bit apps are very on Windows
>64 for AMD64

I think you missed a word there, not sure what you are asking..

>But, at least, Linux comes with it's own compiler, which
>make it a lot more easier to port apps to native 64bit
>LINUX code. I doubt that Microsoft will ever give their
>source and/or compiler for this purpose!

And why would/should they ? There is always GCC if you want it free..

= The views stated herein are my personal views, and not necessarily the views of my wife. =
July 20, 2004 1:58:29 PM

Quote:
>So, this would explain why 32bit apps are very on Windows
>64 for AMD64

I think you missed a word there, not sure what you are asking..

I wanted to say that there is less software emulation needed for AMD64 to run 32-bit apps (because of the x86-based architecture). This explains why AMD64 CPUs can run 32-bit code much faster than the Itanium, which needs much more emulation to interpret/run 32-bit code.

--
It's tricky to use words like <b><font color=green>AMD</font color=green></b> or <b><font color=blue>Intel</font color=blue></b> in a signature some users could think your are biased.
July 20, 2004 2:09:38 PM

>I like the idea of Sempron vs Celeron D rating, but this
>might confuse potential Athlon 64 buyer... Sempron 3100+ vs
>Athlon 64 2800+ which one is faster?

Well, I know either will be faster than a 3 GHz Celeron :) 
Has it been made clear yet whether the Sempron for socket A is just an AXP, or is it really based on the A64?


>This little article expose some potential "problems" for
>AMD :
>http://www.overclockers.com/articles1068/

Euh.. Sorry, but I think I can safely ignore Ed's editorials and not miss a whole lot.

>How AMD can ramp up AMD64 sales?

Same way they ramped K6 and K7 sales.. the market for >$200 cpu's just isnt that big. Once smaller, cheaper 90nm parts hit the shelves, they will sell.

>With the Sempron that target low-end market at low price.
>AMD might have hard time sellign AMD64, since S754 will be
>only low/mid-end. And S939 don't have anything at
>"mid-cost".

Haven't seen Sempron prices yet, but I would not expect a big "hole" (if anything, an overlap) between the top-end Sempron and the bottom-end S754 A64s, and S754 already overlaps with S939.. where exactly would the "mid cost" gap be? I don't much like the idea of having different sockets for different segments, but that aside, I see no problem.

>It gives potential buyer like me hard time! If I go S754
>today, I will be limited to Athlon 64 3700+ (which is not
>problem for at least 1 year), but I don't have PCI-EXPRESS
>support??? And, if I want to go the S939 route, I will
>spend big money and I will not have PCI-EXPRESS support
>neither! This bug me a lot!

Don't be bugged, ignore PCI-E. The benefit isnt there (yet?), and by the time you'd want to upgrade your AGP card and not find one anymore, you'll want a new motherboard anyway.

>On the other side, I could go Intel, but I must pay a
>premium for PCI-EXPRESS and DDR2. And LGA775 processor are
>not yet 64bit ready! Damn it!

Ignore DDR2 *and* PCI-E :D 

As for your options, I would either go with 1) (keep the AXP, when it becomes too slow, upgrade it), or 3). Don't see the problem with 3 really. A year from here, chances are you'll still not be too pressed to buy a PCI-E system...

>AMD should have done only 2 socket

Agreed.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
July 20, 2004 2:30:47 PM

>The rendering engine almost never actually needs a 64-bit
>integer quantity

And if it needed it here or there, a 32 bit ALU could process those 64 bit INTs fast enough for anything but something extremely performance sensitive. 64 bit arithmetic is not the main advantage, at least not for games (encoding, encryption,.. otoh)

>Obviously games don't need more than 2GB address space
>either

Sounds a lot like a "no one needs more than 640 KB" statement to me. Have you tried IL-2 Forgotten Battles? I did. I ran task manager on one display and IL-2 on the other, as I kept getting harddisk activity and immensely annoying stuttering on my 512 MB machine. To my surprise, it only took ~300-400 MB. Then I added a column in task manager, "VM size" (translated from Dutch, so it may be called slightly differently), and bingo! 1.3 GB. It ran off my harddisk all the time.. IL-2 is not really what you'd call a state-of-the-art, ultra-high-res-textures game.. It is, however, the best and most realistic WW2 flightsim around (IMHO)

>Everything else...I suppose is down to the extra GPR

The extra GPRs are just the short-term benefit.. having (nearly) infinite address space is the real kicker. It will be hard to showcase that, though, just like "32 bit" apps/OS's were showcased for their improved multithreading capabilities (background printing, whoohoo!) that had nothing to do with the 16-to-32-bit transition as such. Maybe a 512 MB or 1 GB videocard that will <i>require</i> a 64 bit cpu and OS will open some eyes? For the others, the extra performance of the 16 GPRs or enhanced SSE2 performance will..

= The views stated herein are my personal views, and not necessarily the views of my wife. =
July 20, 2004 2:45:10 PM

>I wanted to say that there is less software emulation
>needed for the AMD64 to run 32bit apps.

Yeah, I am not even sure "emulation" would be the word here. It's pretty close to the way you run 16 bit windows apps under 32 bit NT/XP. There is an extra software layer involved (Windows on Windows, aka WoW) to translate APIs, but the code as such is executed natively by the cpu.

BTW, windows on Itanium also executes 32 bit x86 binaries much the same way, through WoW, and the cpu itself has x86-compatible execution units built in (so again, no emulation); it's just dog slow. So slow it will be replaced with software emulation, in fact.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
July 20, 2004 2:48:15 PM

Quote:
Has it already been made clear if Sempron for socket A is just a AXP, or is it really based on A64 ?

I'm confused about that too. I heard in this forum (can't find the post) that the Athlon 64 was designed to be able to run without its internal memory controller. So it's possible that the Sempron for Socket A will be based on the AMD64 architecture. But I doubt it, since all the Sempron specs look like Barton specs. AMD may shift the Sempron from Barton to AMD64 as yields and production ramp up... It's just an idea...

Quote:
Euh.. Sorry, but I think I can safely ignore Ed's editorials and not miss a whole lot.

Why do people hate him? I read some interesting opinions/visions on his website. I don't always agree with him, but I don't hate him either!

Back to AMD/Sempron/Upgrades...

Finally, the main problem is the AMD socket galore! This messes everything up! We can buy 4 different AMD sockets today! Damn it! Even if we forget S940, we still have 3 choices!!!

I will probably go for the first option! Only a GPU upgrade (for now)! But I will wait to see how HL2/Doom3 perform on a PC like mine. I don't want to waste my money; since I don't play a lot of games, I must choose my GPU wisely. I can't buy a top-of-the-line GPU; that would be too much for my usage!

If my CPU can't keep up with HL2/Doom3, I will wait a bit and go for the S939 platform, but this will cost me a lot for CPU power that I don't necessarily need! I want an S939 Athlon 64 2800+, which AMD will never give us.

--
It's tricky to use words like <b><font color=green>AMD</font color=green></b> or <b><font color=blue>Intel</font color=blue></b> in a signature some users could think your are biased.
July 20, 2004 6:08:07 PM

Quote:
I've abroad for a couple of weeks, nice to see you missed me.

Nice. And stop telling people that; they might think I respect you.

Quote:
No one ever claimed ID would make a 64 bit port.

It also shows he has zero interest in 64-bit-capable systems, and as a market leader in engine design that goes a long way, in my opinion.

Quote:
Unreal, Far Cry and AA otoh

Unreal??? Where is this magic patch? All I have seen is demos. Far Cry??? Come on, Ubisoft can't find their way out of a wet paper bag, let alone release the patch. The last two I know nothing about, and nothing there interests me.

Quote:
where is the doom3 demo? hope it comes out before the retail game.

As far as I know there might not be a demo; they are targeting hype-based sales. Also, rumor was the demo would be near 1.2 gigs in size, which is a bit much for their servers and clients to download.

Quote:
I'm not sure that SSE2 64bit viaribles transformation is faster than AMD64 computation on 64bit.

In controlled or favorable code, SSE2 can easily do 2x the instruction calls and data values of an A64's 64-bit units.

Quote:
SSE2 might only simplify programming but in the internal CPU structure it could "decompose" the transformation to make it fit in the CPU registers.

You are very correct, SSE2 greatly streamlines code and system ticks to deal with complex math; unfortunately, due to its extremely specialized nature, the general purpose registers for INT and FP are 100% incompatible with each other.

Quote:
First, I must say that I love you!!!

I don’t know what to say...

Quote:
I understand that Microsoft used a big chunk of the Windows for Itanium code, but they compiled it for AMD64???

64bit code is 64bit code; toss it through another compiler and watch out for vendor-specific architectural optimizations.

Quote:
Don't be bugged, ignore PCI-E. The benefit isnt there (yet?), and by the time you'd want to upgrade your AGP card and not find one anymore, you'll want a new motherboard anyway.

SLI seems to show it off nicely.

Quote:
And if it needed it here or there, a 32 bit ALU could process those 64 bit INTs fast enough for anything but something extremely performance sensitive.

I would hate to see it personally; the memory tag bits will be different for a 32bit call vs. a 64bit call. It’s not impossible; the processor would just carry bits or break up the instructions. In all my years I have yet to hear of a native 32bit piece of software running correctly on a native 64bit processor.

Quote:
64 bit arithemtic is not the main advantage, at least not for games (encoding, encryption,.. otoh)

Why wouldn’t games benefit? Static mesh and model mesh prep would be greatly sped up with additional addressing space.

Quote:
Have you tried IL-2 Forgotten Battles ?

Have you played any MS air sims? They run better and with less RAM usage; sloppy coding is the only real explanation... stupid programmers and bloat code.

Xeon

<font color=red>Post created with being a dickhead in mind.</font color=red>
<font color=white>For all emotional and slanderous statements contact THG for all law suits.</font color=white>
July 20, 2004 6:33:11 PM

>It also shows he has zero interest in 64bit capably

"it" ? If you are referring to an article or interview or something, please link.. I've been out of the loop.

>and as a market leader in engine design that goes a long
>way in my opinion

It does, if true. Still, since there is no problem running 32 bit games on 64 chips or OS's, I fail to see this as an argument against a 64 bit system. Especially when you see nice speedups running 32 bit games under 64 bit windows.

>Unreal??? Where is this magic patch all I have seen is
>demos

You'd expect Epic to sell you a game for which there is no OS yet ? And a patch ? The patch would be as big as the game, and my last Unreal game was 3 or 4 CD's, so I wouldnt count on a patch.

> Far Cry??? Come on Ubisoft can’t find their way out of a
>wet paper bag let alone release the patch

Same comment applies, only that Ubisoft has little to do with this, Crytek develops it. They've said they will support it once windows is there.

>The last 2 I know nothing about nothing that interests me.

AA= America's Army
OTOH= On The Other Hand :|

>SLI seems to show it off nicely.

Only on Alienware's $5000+ Xeon machine AFAIK. At least so far.

>Why wouldn’t games benefit, static mesh, and model mesh
>prep would be greatly sped up with additional addressing >space?

I'm not saying they can not benefit, the discussion was about 64 bit <b>arithmetic</b>, not address space. I don't see games doing a whole lot of 64 integer math.

> would hate to see it personally the memory tag bits will
>be different for a 32bit call vs. a 64bit call.

Again, <b>arithmetic</b>. ALU= Arithmetic and Logical Unit. There is no problem at all processing 64 bit ints on a 32 bit ALU, its just slower.

>In all my years I have yet to
>hear a native 32bit piece of software run correctly on a
>native 64bit processor.

I assume you meant the other way around, which is impossible when you are talking about 64 bit address space, and which is easy when you are talking about integer math. If you did mean what you wrote, well, I guess I don't know any 64 bit cpu except Alpha that can *not* correctly execute 32 bit code :) 

>Have you played any MS air Sims,

Yes, and its not even in the same league when it comes to flight model realism and physics. Its an arcade game compared to IL-2.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
July 20, 2004 6:41:30 PM

By the way, if you wanted to reply to P4Man, you missed it! :smile:

Quote:
Nice and stop telling people that they might think I respect you.

OK... I understand that Xeon doesn't like P4Man??? Am I wrong?

About 3D engines: Do you really think that only id Software deserves some respect in this domain? I'm not sure what your position is; I tried to understand your whole point, but it's a bit confusing in the torrent of quote/reply.

Quote:
SSE2 in controlled or favorable code it can easily do 2x more instruction calls and data values of an A64's 64bit modules.

So a heavily optimised SSE2 path could be faster than a heavily optimised AMD64 path. But the SSE2-optimised apps will still be 32bit. The best would be SSE2+AMD64 optimisations! This would improve performance even more.

Quote:
SLI seems to show it off nicely.

The problem with SLI is that it's another thing that will only target the elite and avid gamer. nVidia will only offer this on their TOP products. They should push it to the entry-level too! This would be nice for people to know that when they need more GPU power they could stick in another GPU for a few bucks! This would hurt high-end card sales, but it would boost sales of low-end cards.

But I doubt they will ever do it, since every company wants to make more money, and keeping SLI to $300+ cards ensures $600 sales, better than trying to sell a bunch of $100 cards!

Quote:
Why wouldn’t games benefit, static mesh, and model mesh prep would be greatly sped up with additional addressing space?

I agree! And this could also benefit AI/simulation to some extent too!

Quote:
Have you played any MS air Sims, they run better and with less RAM usage, sloppy coded is the only real explanation... stupid programmers and bloat code.

Stupid programmers = Halo ported to PC! What a piece of crap, the FPS is so low for the IQ level... I would like to understand how a game that was running smoothly on the Xbox (P3-700 + GeForce3.5) hardware can't run well on an Athlon XP 2400+ with a Radeon 8500 @ 640x480...

--
It's tricky to use words like <b><font color=green>AMD</font color=green></b> or <b><font color=blue>Intel</font color=blue></b> in a signature some users could think your are biased.
July 20, 2004 6:54:56 PM

Quote:
Especially when you see nice speedups running 32 bit games under 64 bit windows.

Give WoW credit for that, not the 64bit aspect.

Quote:
The patch would be as big as the game, and my last Unreal game was 3 or 4 CD's, so I wouldnt count on a patch.

Sweeney said they had one programmer do it in about a week or a month, I don’t remember now. So no, it wouldn’t be a massive patch.

Quote:
Same comment applies, only that Ubisoft has little to do with this, Crytek develops it.

Ubisoft is their publisher and rest assured they apply the pressure on Crytek hence the delay, so as far as I am concerned it's Ubisoft's fault.

Quote:
Only on Alienware's $5000+ Xeon machine AFAIK. At least so far.

It removes any doubt that PCI-E is useless technology, though, since it does carry some serious performance when used.

Quote:
I'm not saying they can not benefit, the discussion was about 64 bit arithmetic, not address space. I don't see games doing a whole lot of 64 integer math.

They are one and the same as far as I am concerned, since 64bit arithmetic can have a larger value, which helps in heavy FP and even INT if the code is complex enough, since there will not be a need to shift bits around to complete the math. Point being, 64bit arithmetic will utilize the address space to tap into its strength.

Quote:
Again, arithmetic. ALU= Arithmetic and Logical Unit. There is no problem at all processing 64 bit ints on a 32 bit ALU, its just slower.

You can’t be serious; every single data variable that goes through there has to have memory tag bits. If not, the machine won’t know where to put it back. They are different in terms of 32bit and 64bit calculations.

Also, in case you missed my entire point, the increase in performance in certain apps that we are seeing from a 32bit environment to a 64bit environment is because of the great work that MS has done on WoW.

Quote:
Yes, and its not even in the same league when it comes to flight model realism and physics. Its an arcade game compared to IL-2.

Difference in opinions then :smile: .

Xeon

<font color=red>Post created with being a dickhead in mind.</font color=red>
<font color=white>For all emotional and slanderous statements contact THG for all law suits.</font color=white>
July 20, 2004 7:05:42 PM

Quote:
OK... I understand that Xeon don't like P4Man??? Am I wrong?

I actually enjoy having debates/arguments with him, he keeps me sharp.

Quote:
Do you really think that only id Software deserves some respect in this domain?

It's a bit complicated, but I have a great deal of respect for John Carmack and the company as a whole. I tend to have tunnel vision in regards to looking at other programmers and companies as competent competition.

Quote:
So heavily optimised SSE2 path could be faster than heavily optimised AMD64 path. But, the SSE2 optimised apps. will still be 32bit. the best would be SSE2+AMD64 optimisations! This would improves the performances even more.

They are 128bit registers; they can chew at 32bit and 64bit calls and variables without a problem. But in the case of the A64, why bother with the complexity of SSE2 optimizations when you can just code for the 64bit GPRs (I like that acronym, I have never used it before).

Quote:
The problem with SLI is that it's another thing that will only target the elite and avid gamer. nVidia will only offer this on their TOP products. They should push it to the entry-level too! This would be nice for people to know that when they will need more GPU power they could stick another GPU for a few bucks! This would hurt high-end cards sells, but this would boost sales of low-end cards.

True but in time it will trickle down the tax brackets.

Quote:
I agree! And this could also benefits AI/Simulation to some extent too!

Damn, forgot about that too. AI branches could carry larger variables, allowing for more branches.

Quote:
Stupid programmers = Halo ported on PC! What a piece of crap, FPS is so low for the IQ level... I would like to understand how a game that was running smoothly on the Xbox (P3-700 + GeForce3.5) hardware can't run well on an Athlon XP 2400+ with a Radeon 8500 @ 640x480...

Yes, that’s quite true, but the porting issue was more down to a back buffer that the Xbox GPU uses that standard GPUs do not, hindering performance by quite a bit.

Xeon

<font color=red>Post created with being a dickhead in mind.</font color=red>
<font color=white>For all emotional and slanderous statements contact THG for all law suits.</font color=white>
July 20, 2004 7:15:10 PM

>Give WoW credit for that not the 64bit aspect

I should credit a compatibility emulation layer for a speedup ? Hmmm.. interesting. Maybe MS should release a 32 bit WoW32 layer to speed up those games on 32 bit cpu's, you think that would work ?

>Sweeny said they had one programmer do it in about a week
>or month I don’t remember now. So no it wouldn’t be a
>massive patch.

Ahem.. I think that dude recompiled the project after that week or month, so I think he ended up with a few gigabytes worth of binaries. Since AFAIK they don't give you the source, I think the patch would be frigging huge if it were possible. Sounds like patching UT into Doom3 or something, I'm sure some bits are still the same ;) 

>Removes doubt that PCI-E is useless technology though since
>it does carry some serious performance when used.

Doesnt help anyone buying a PCI-E board today with just one GPU slot. Hence my comment, ignore it for now.

>They are one in the same as far as I am concerned

If you can not distinguish between address space and arithmetic, I am not sure I have anything left to discuss with you. Next thing you will say bus width matters too because those 64 bit long integers must be read from memory.

>You can’t be serious, every single data variable that goes
>through there has to have a memory tag bits. If not that
>machine won’t know where to put it back too. They are
>different in terms of 32bit and 64bit calculations.

May I humbly suggest you do some reading on the subject ?
These are two highly recommended articles on Ars Technica:
<A HREF="http://arstechnica.com/paedia/c/cpu/part-1/cpu1-1.html" target="_new"> Understanding the Microprocessor</A>

<A HREF="http://arstechnica.com/cpu/03q1/x86-64/x86-64-1.html" target="_new">An Introduction to 64-bit Computing and x86-64</A>

>Also in case you missed my entire point, the increase in
>performance in certain apps that we are seeing for 32bit
>environment to a 64bit environment is because of the great
>work that MS has done on WoW.

That is pretty hilarious really.. No matter how good WoW is, it just translates 32 bit windows API calls into 64 bit ones, nothing more, nothing less. It's an added overhead, not a performance enabler. The increased performance you see is because any app spends quite a bit of time in the Windows APIs (or DirectX, drivers,..), which execute faster on an AMD64 machine in 64 bit mode, period. If you want to credit WoW64, credit it for not costing even more performance than it does.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
July 20, 2004 7:47:59 PM

Quote:
It's a bit complicated but I have a great deal of respect for the John Carmack and the company as a whole. I tend to have tunnel vision in regards to looking at other programmers and companies as competent competition.

OK, I understand your point! For me, I really like what Blizzard is doing (with Rob Pardo as lead designer). Blizzard impress me not with their code/engine programming, but because they are so devoted to their products and franchises. They keep them alive and fun. All their titles are still patched/updated/balanced/tweaked even if they are very old (StarCraft was launched in 1998).

And their games are always attractive to the masses and the elite. This is quite an achievement for a game studio.

I also admire id Software, but I don't like the Quake-style gameplay. I wish Doom 3 will bring back the Doom-like experience. In terms of 3D engine quality, I mostly enjoyed it through MOHAA and RTCW. But I must admit that the Source engine seems to have a lot of potential too!

Is there a place where we can see who will use Doom3/Source/Unreal engine in their upcoming titles?

--
It's tricky to use words like <b><font color=green>AMD</font color=green></b> or <b><font color=blue>Intel</font color=blue></b> in a signature some users could think your are biased.
July 20, 2004 8:08:19 PM

Quote:
I should credit an compatibility emulation layer for a speedup ?

When it's changing memory tag bits so the code just goes on through, it sure does.

Quote:
Ahem.. I think that dude recompiled the project after that week or month, so I think he ended up with a few gigabyte worth of binaries. Since AFAIK they don't give you the source, I think the patch would be frigging huge if it where possible. Sounds like patching UT into Doom3 or something, I'm sure some bits are still the same ;) 

<A HREF="http://www.firingsquad.com/features/sweeney_interview/d..." target="_new">Here</A>.

Quote:
Firing Squad: Can you describe the process involved in migrating to AMD's 64-bit architecture? Has the transition been a difficult one?

Since our code is pure C++ and already ran on 32-bit Windows and Linux, the only work required was to make the code 64-bit safe. No Hammer-specific work was necessary to get the port up and running; what we did for Hammer is the same thing that would be needed to run on 64-bit PowerPC or 64-bit Itanium.

In the case of the Unreal code base, about 99.9% of the code was already 64-bit safe and didn't need touching. Of course, with a million-line code base, the remaining 0.1% left a hundred or so places in the code that needed updating because of assumptions we made years ago before we'd thought about 64-bit. It was a relatively straightforward process, and took Ryan Gordon about 10 days of hard work.

Quote:
Doesnt help anyone buying a PCI-E board today with just one GPU slot. Hence my comment, ignore it for now.

OK let’s leave it at that then.

Quote:
Next thing you will say bus width matters too because those 64 bit long integers must be read from memory.

It can get its 2 variables, since 64bit CPUs have 128bit FSBs. What’s the point in utilizing 64bit calls and data values if you can’t get at least 2 down the pipeline to work with? Waste of cycles if you ask me.

Quote:
If you can not distinguish between address space and arithmetic

If you are running 64bit arithmetic you’ll be taking its address space along for the ride; it's a given, I would have to think.

Quote:
May I humbly suggest you do some reading on the subject ?
These are two highly recommended articles on Ars Technica

In code it looks that way, but in machine language it is far different; that’s my point.

Quote:
That is pretty hilarious really.. No matter how good WoW, WoW just translates 32 bit windows API calls into 64 bit ones, nothing more, nothing less. Its an added overhead, not a performance enabler. The increased performance you see is because any app just spends quite a bit of time in the Windows API's (or directX, drivers,..), which executes faster on a AMD64 machine in 64 mode, period. If you want to credit WoW64, credit it for not costing even more performance than it does.

So you cut the A64's registers down to the 64bit ones (that’s 16, I do believe) since it's only running in 64bit, and those 16 registers are somehow faster than using the other GPRs? Because that’s how it reads to me, since last I checked WoW takes 32bit instructions and data values and converts them to 64bit compatible ones.

Also, if it spends more time being run in APIs, drivers and DirectX, wouldn’t that add even more overhead, since they are designed to (at least drivers) create binaries for their various hardware counterparts?

Xeon

<font color=red>Post created with being a dickhead in mind.</font color=red>
<font color=white>For all emotional and slanderous statements contact THG for all law suits.</font color=white>
July 20, 2004 8:38:48 PM

first of all, i would like to point out that no one here has ever said PCI-E is useless, just that it isn't a selling point right now. the nforce 4 this fall will bring dual pci-e slots to support their SLI tech, so by then it will be more in the range of the average gamer/high end user instead of having to spend 5k+ on a xeon machine....

if you're going to buy into pcie this year, and you're after performance, wait for the nforce 4 and get dual nvidia cards.


you know i hear all of this discussion about AMD64, let's throw this at EM64T, although i know it's impossible since intel has decided to make it invisible to the public lol, but where is all the testing on nocona? no one can say that silence was normal or that it is a good sign lol.

i also don't think you can say that WoW is the only reason that 32bit apps run better on the 64bit edition, give amd a bit of credit here. was it WoW that gave a boost to 16bit apps when microsoft went 32bit? and who knows, for all we know longhorn could come as a 64bit version only, since it's slated for 2006 or 2007.

i did want to mention about developers. i like blizzard a lot too, i've been a fan of their stuff for a long time, along with cyan for their myst series. now that ubisoft is running things, i am glad to see they have achieved a lot with URU and Myst 4. now this is all pc developers mind you, on consoles it's a whole different story. lucas arts, ubisoft, nintendo, just to name a few lol stand out there.
July 20, 2004 9:00:30 PM

>When its changing memory tag bits so the code just goes on
>through it sure does.

?? You don't have a clue what you are talking about. WoW translates windows API calls, it has nothing to do with "memory tag bits". If you don't know what an API is, look it up, do some reading, educate yourself.

>In the case of the Unreal code base, about 99.9% of the
>code was already 64-bit safe and didn't need touching.

No, the source code may have been nearly identical but *all of it* needed to be <b>recompiled</b>, probably using a different compiler even, ensuring virtually not a single bit of the resulting binary would be identical to the old 32 bit version (just the textures, maps and data files). Hence, no patch IMO.

>If it can’t get its 2 variables since 64bit CPU's have
>128bit FSB's. What’s the point in utilizing 64bit calls and
>data values if you cant get at least 2 down the pipeline to >work with, waste of cycles if you ask me.

Oh dear.. the sarcasm escaped him :( 

> you are running 64bit arithmetic you’ll be taking its
>address space along for the ride, it's a given I would have
>to think.

Don't think too much, read. I could do 64 bit arithmetic on my 8 bit TRS-80 (Zilog Z80 powered).

>In code it looks that way but in machine language it is far
>different that’s my point.

Do the reading, you have no point.

>So you cut the A64's registers down to the 64bit(that’s 16
>I do believe) ones since its only running in 64bit and
>those 16 registers are somehow faster then the using the
>other GPR's?

Hu ?

> since last I checked WoW takes 32bit instructions and data
>values and converts them to 64bit compatible ones.

Then its obvious you never checked, because WoW doesnt do *anything* like that.

>Also if it spends more time being run in API's, drivers and
>direct X wouldn’t that add even more overhead since they
>are designed to (at least drivers) create binaries for
>their various hardware counterparts.

Hu ? You mean you don't need DirectX or drivers on a 32 bit system or what ? No, there is no additional overhead; a DirectX game will call the DirectX APIs, and DirectX code is executed. That is 32 bit code on current windows, and native 64 bit code on XP64. The only overhead is the API calls that need to be intercepted and translated by WoW (and switching the cpu between states, etc, but that is just a few irrelevant cycles per perhaps every few million instructions).

Seriously, I (again) suggest you read the articles I linked to, I think they will help you better understand what you so like to discuss.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
July 21, 2004 1:24:10 AM

Quote:
first of all, i would like to point out that no one here has ever said PCI-E is useless

I said we would leave it at that, no need to rehash.

Quote:
i also dont think you can say that WoW is the only reason that 32bit apps run better on the 64bit edition, give amd a bit of credit here.

OK, AMD made a fantastic piece of work when they brought the Athlon 64 to reality. I also didn’t mean that WoW does all the work; that's me being too pro-Intel. Those extra registers are a great addition, as is the SSE2 capability.

Quote:
No, the source code may have been nearly identical but *all of it* needed to be recompiled, probably using a different compiler even, ensuring virtually not a single bit of the resulting binary would be identical to the old 32 bit version (just the textures, maps and data files). Hence, no patch IMO.

You're right, that’s your opinion, so let's leave it at that and see if they can release a patch for the game or will need to make an entirely separate build.

Quote:
Oh dear.. the sarcasm escaped him :( 

Ditto I am not sure what the hell I meant there... tired.

Quote:
Don't think too much, read. I could do 64 bit arithmitics on my 8 bit TRS-80 (Zilog Z80 powered).

Break it down into smaller data sizes? Interesting, never thought of that, but wouldn’t that be inefficient?

Unless of course I am missing your point; there must be one, since you're defending it :smile: .

Quote:
Do the reading, you have no point.

Oh I do but I don't seem to be conveying it to you correctly.

Quote:
Hu ?

That’s me trying to figure out what you are saying, don’t be bothered by it, just me thinking out loud.

Quote:
Then its obvious you never checked, because WoW doesnt do *anything* like that.

The technology allows for 32-bit applications using the IA32 instruction set to be executed on Windows XP 64-bit Edition?

That sounds to me like it's taking 32bit code and converting it to 64bit code and vice versa. Also, WoW is unable to execute 16bit code, which really makes it sound something like the JIT compiler you mentioned.

Quote:
You mean you don't need DirectX or driver on a 32 bit system or what ? No there is no additional overhead, a DirectX game will call the DirectX API's, and directX code is exectuted.

But if it's 32bit, the software would have 32bit DirectX code, yet it would be running on a 64bit DirectX, so why wouldn’t there be any overhead from the API layer? In fact, DirectX has to interpret engine calls to the hardware, so it would always have overhead.

Quote:
Seriously, I (again) suggest you read the articles I linked to, I think they will help you better understand what you so like to discuss.

I’ve seen the material before, it's nothing new, but thx anyways.

Xeon

<font color=red>Post created with being a dickhead in mind.</font color=red>
<font color=white>For all emotional and slanderous statements contact THG for all law suits.</font color=white>
July 21, 2004 2:25:12 AM

Quote:
<i>Originally posted by <b>Xeon</b></i>
64bit code is 64bit code, toss it threw another compiler and watch out for vendor specific architectural optimizations.

Watch out for endianness as well. x86 (and AMD64) are little-endian (least-significant-byte first in memory). IA64 can go either way.

Quote:
Sweeny said they had one programmer do it in about a week or month I don't remember now. So no it wouldn't be a massive patch.

The recompile dramatically alters the binary executables. Obviously the map/model data wouldn't change though (or shouldn't, if it's properly platform-independent), and I strongly suspect those data files consume the vast majority of those three or four CDs.

Quote:
<i>Originally posted by <b>P4man</b></i>
Not true. In fact, 32 bit apps *do* need an emulation layer (64 bit WoW) to execute. The cpu doesn't require any emulation, but windows does if you want it to execute 32 bit code and APIs. I'm not sure how Linux or other OS's handle this, but I assume it's the same. I remember 64 bit SuSE (or another distro, don't recall) not supporting 32 bit apps yet.

You're thinking of a thunking layer. It translates API calls from 32-bit binaries to something that can go through 64-bit APIs.

I can tell you now, Linux does NOT do thunking layers (at least not of this sort). That's a binary-backwards-compatibility tech that the kernel developers generally don't care for, because it's a bit pointless when most of your apps can be recompiled anyways.

That means if you have an IA32 Linux app running on AMD64 Linux, you'll need IA32-compiled versions of every library that app requires. And of course the AMD64-compiled apps will need AMD64-compiled libraries.

<i>"Intel's ICH6R SouthBridge, now featuring RAID -1"

"RAID-minus-one?"

"Yeah. You have two hard drives, neither of which can actually boot."</i>
July 21, 2004 7:15:05 AM

>I can tell you now, Linux does NOT do thunking layers (at
>least not of this sort). That's a
>binary-backwards-compatibility tech that the kernel
>developers generally don't care for, because it's a bit
>pointless when most of your apps can be recompiled anyways.
>
>That means if you have an IA32 Linux app running on AMD64
>Linux, you'll need IA32-compiled versions of every library
>that app requires. And of course the AMD64-compiled apps
>will need AMD64-compiled libraries.

Really ? So how does this work then in practice ? Say I have a (compiled) 32 bit game I want to run under 64 bit Linux, the game makes calls to X-windows, OpenGL API, sound driver, etc,.. the game just won't run then ? Sorry if this is a silly question, I know next to nothing on Linux.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
July 22, 2004 2:30:48 AM

Quote:
Watch out for endianness as well. x86 (and AMD64) are little-endian (least-significant-byte first in memory). IA64 can go either way.

Didn't know that thx dude nice to see you posting again as well.

Xeon

<font color=red>Post created with being a dickhead in mind.</font color=red>
<font color=white>For all emotional and slanderous statements contact THG for all law suits.</font color=white>
July 22, 2004 2:46:41 AM

Quote:
Really ? So how does this work then in practice ? Say I have a (compiled) 32 bit game I want to run under 64 bit Linux, the game makes calls to X-windows, OpenGL API, sound driver, etc,.. the game just won't run then ? Sorry if this is a silly question, I know next to nothing on Linux.

Well, you'd have to have 32-bit variants of your X-Windows/OpenGL/sound libraries installed for the game to run. It's not too difficult to have these installed alongside corresponding 64-bit libraries--in mixed 32/64-bit Linux installations like SuSE for AMD64, it's common to put 64-bit libraries in /usr/lib64 or the like while reserving /usr/lib for 32-bit libraries. That way the libraries don't stomp on each other (even with identical filenames), and the run-time linker can sort things out from there.

<i>"Intel's ICH6R SouthBridge, now featuring RAID -1"

"RAID-minus-one?"

"Yeah. You have two hard drives, neither of which can actually boot."</i>