Guest
Got a question about 64-bit.
I seem to recall that P3/P4/T-birds (?) can theoretically address up to 64 GB of RAM (36 address lines, I think). However, because they only have 32-bit registers, they can only address 4 GB at once, i.e., a single process can only be 4 GB. Is this correct?
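Just to sanity-check the sizes I'm talking about, here's a quick back-of-envelope in Python (my own reasoning, happy to be corrected):

```python
# 36 physical address lines (PAE-style) vs. a 32-bit virtual address space.
physical = 2 ** 36  # bytes reachable over 36 address lines
virtual = 2 ** 32   # bytes addressable through a 32-bit pointer/register

print(physical // 2 ** 30)  # 64  -> GB of physical RAM the CPU can address
print(virtual // 2 ** 30)   # 4   -> GB of address space one process sees
```

So the 64 GB figure is about physical memory, while the 4 GB limit is about what one process can point at through 32-bit registers.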
Regardless of whether it is data or code? I was just thinking about good old EMS/EMM (?) on a 286.. where you could only have 640 KB of code, but by using a special translation scheme, your application could access a few megabytes of data.. but data only. Anything beyond 640 KB / 1 MB could not be used to run code. Is this thing similar?
My point is.. I don't think you will very often need more than 4 GB for a process/application.. but you might want much more to store data in. Is that possible? Could you have, say, a 10 MB application accessing 10+ GB of data in memory on current CPUs?
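This is roughly the kind of "windowing" I mean, EMS-style: the data lives outside the process's address space and you map a small window of it in at a time. A toy sketch using Python's `mmap` with a file standing in for the big data pool (file name and sizes are made up):

```python
import mmap
import os
import tempfile

# A window must start on the OS allocation boundary; use it as our toy size.
# A real 32-bit process would window a multi-GB pool, not a tiny file.
WINDOW = mmap.ALLOCATIONGRANULARITY

path = os.path.join(tempfile.mkdtemp(), "bigdata.bin")
with open(path, "wb") as f:
    f.write(b"A" * WINDOW + b"B" * WINDOW)  # two "windows" worth of data

with open(path, "rb") as f:
    # Map only the SECOND window into our address space, then read from it.
    m = mmap.mmap(f.fileno(), WINDOW, access=mmap.ACCESS_READ, offset=WINDOW)
    print(m[:4])  # b'BBBB'
    m.close()
```

The application itself stays small; only the window it is currently looking at occupies its address space, which is the same trick EMS pulled within 640 KB.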
If you can, I fail to see the point of 64-bit CPUs.. Have you got any idea how much code a compiled 4 GB process is?? I don't know how compiled code compares to source code, but let's say about 1/20 (please correct me, I'm no developer). If I'm right, that would mean 80 GB of source code. That is 80,000 MB, or about 80,000,000,000 bytes. Let's say an average line of code is, what, 100 bytes (?), that's 800,000,000 lines. It would take a team of 100 developers maybe 10 years just to *type* that. Am I missing something?
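Checking my own arithmetic (the 1/20 compile ratio and 100 bytes per line are pure guesses on my part):

```python
# Back-of-envelope: how much source code would a 4 GB binary imply?
compiled_bytes = 4 * 10 ** 9          # a 4 GB compiled process
source_bytes = compiled_bytes * 20    # guessed 1:20 compiled-to-source ratio
lines = source_bytes // 100           # guessed ~100 bytes per source line

print(source_bytes)  # 80000000000 -> 80 GB of source
print(lines)         # 800000000   -> 800 million lines
```

So the numbers in my paragraph do hold up under those assumptions, absurd as the result is.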
What do we need 64-bit CPUs for, then? Don't we just need 64-bit addressing?
---- Owner of the only Dell computer with a AMD chip