Ripped from a previous post of mine:
--Snip---
Problems with 4 GiB of system memory in PCs
Problem 1) The "Memory Hole" problem: for legacy reasons, all device interface maps reside at the top of the 4 GiB address space (the idea being to allow memory mapping of devices in standard memory space by starting at the end, which at the time was 2^32 bytes, or 4 GiB, and working backwards). Any RAM that would be inaccessible because it overlaps these device maps must be remapped to the address space above 4 GiB. This can be accomplished either in software (via BIOS + OS awareness) or in hardware (via BIOS and hardware support, only available in the 90nm chip revision).
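A rough sketch of the arithmetic, just to make the hole concrete (the size of the device-map reservation varies by board and installed devices; the 768 MiB figure below is purely an example, not a real measurement):

```python
# Illustrative "memory hole" arithmetic. The MMIO reservation size
# (768 MiB here) is a made-up example figure; real systems vary.
GiB = 2**30
MiB = 2**20

address_space = 4 * GiB        # 32-bit physical address limit
mmio_reserved = 768 * MiB      # hypothetical device-map reservation at the top
installed_ram = 4 * GiB

# Without remapping, RAM overlapping the device maps is simply lost:
visible_ram = min(installed_ram, address_space - mmio_reserved)
print(visible_ram / GiB)       # 3.25 (GiB of the 4 GiB actually visible)

# With hardware remapping, the overlapped RAM is relocated above 4 GiB,
# so an OS with >32 address bits can reach all of it:
remapped_top = address_space + (installed_ram - visible_ram)
print(remapped_top / GiB)      # 4.75 (physical map now extends past 4 GiB)
```

This is also why a 32-bit OS without remapping typically reports something like 3.25 GB instead of 4 GB.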
Problem 2) The "Hey, where'd my other 2 gigs go?" problem: Windows XP 32-bit (Home only comes as 32-bit) can address 32 bits' worth of address space (without ugly hacks like PAE, but that's for another thread), which, as said before, is 4 GiB. We're money, right? Unfortunately not, as Windows splits the address space, and hence the memory, into "kernel memory" and "user memory", and it splits that 4 GiB right down the middle, leaving processes able to access at most 2 GiB of space. Short of adding more address bits, there is no way to let a process use the entire address space, but there is a boot parameter you can set in boot.ini that may help: /3GB. When appended to the boot params, it allows processes up to 3 GiB of address space at the cost of leaving the kernel only 1 GiB. I've seen this approach work wonders and I've seen it crash systems, requiring fine-tuning of the split with the /USERVA=##### switch to back off from 3 GiB. Even after all that, note that only applications compiled with special flags are even aware of the possibility of using more than 2 GiB of memory, and most of those are in the scientific and engineering realm.
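For anyone who wants to try it, here's roughly what those boot.ini entries look like (the disk/partition paths and /USERVA value below are example values for your own system, not something to copy verbatim; /USERVA takes a size in MB):

```ini
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS

[operating systems]
; Stock entry: default 2 GiB user / 2 GiB kernel split
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows XP" /fastdetect

; 3 GiB user / 1 GiB kernel split
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows XP /3GB" /fastdetect /3GB

; Backed-off split: /USERVA is in MB, so 2800 gives ~2.73 GiB of user space
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows XP tuned" /fastdetect /3GB /USERVA=2800
```

As for the "special flags": the application has to be linked with the large-address-aware flag (/LARGEADDRESSAWARE in Microsoft's linker); you can check an .exe with `dumpbin /headers`, which reports "Application can handle large (>2GB) addresses" when the flag is set.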
Problem 3) The "Where's the performance?" problem: As others have touched on here, save for high-end video editing, very high-end photo editing, mathematical simulations on large data sets, running a mid-level to entry-enterprise-level (web|DB|domain) server, or any other application where routine access to very large amounts of data is the order of the day, you will see little to no improvement over 2 GiB. There are many articles around (including one on THG) that illustrate this.
Problem 4) The "Windows x64 kinda sucks" problem: As stated before, the only "real" way to see all those pretty bytes is to get a system with enough available address space to house the memory and the mapped devices in one go, which requires an operating system with a few extra address bits in its addresses. What you will quickly realize is that some of your old hardware won't work anymore and, guess what? It never will. Many companies just aren't supplying 64-bit drivers and, while the situation is much improved from "the early days", it is still a problem. Beyond that, you will have to install many 32-bit programs because there aren't 64-bit versions of them, and where there are (IE and Firefox), some plugins are not (Flash, the Java plugin), which forces you back to the 32-bit browser. Linux and BSD are much better in the driver realm, but that point is probably moot here. Such is life on the bleeding edge.
Finally, you could always upgrade later; after all, you can still buy PC133.
--Snip--