Sorry about not being able to write a decent thread description. I have 2 GB of RAM on my Windows 7, 64-bit OS. I consistently run several apps at a time and usually have at least 700-900 MB of memory left. I have heard that more memory makes the computer's access and delivery time shorter. Would you agree, and why? My understanding has been that the less RAM that is being used, the faster it accesses and delivers the data, because it takes less time to look through a small amount of data to find what you need than to look through a large amount of data. I would imagine the same is true for the hard drive. So if you had only 1 GB of RAM and it held all the data it could (1 GB of data), it would take the same amount of time to access and deliver that data as 2 GB of RAM holding the same 1 GB of data, assuming the amount of data being accessed is the same in both cases and all other parameters are equal.
It takes microseconds to find something in RAM; it takes milliseconds to load something from a hard drive. The more RAM you have, the more things the OS can keep in RAM in anticipation of reuse. Some smarter OSes, like Windows 7, can even anticipate your use and preload it for you. Table lookups are done via hashing techniques, not sequential searches, so the size of the table is largely insignificant.
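You can get a rough feel for the RAM-vs-disk gap with a quick sketch like this (a toy benchmark, not a rigorous one; the file path, payload size, and timings are arbitrary, and OS file caching can narrow the measured gap):

```python
import os
import tempfile
import time

# Same 1 MiB payload, once in RAM, once written out to disk.
payload = b"x" * (1024 * 1024)
data_in_ram = payload

path = os.path.join(tempfile.gettempdir(), "ram_vs_disk_demo.bin")
with open(path, "wb") as f:
    f.write(payload)

t0 = time.perf_counter()
ram_total = len(data_in_ram)        # touch data already sitting in RAM
ram_time = time.perf_counter() - t0

t0 = time.perf_counter()
with open(path, "rb") as f:         # go through the file system instead
    disk_total = len(f.read())
disk_time = time.perf_counter() - t0

os.remove(path)
print(f"RAM: {ram_time:.6f}s   disk: {disk_time:.6f}s")
```

On a typical machine the disk read is orders of magnitude slower than touching data already in memory, which is exactly why the OS tries to keep as much as possible in RAM.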
Thx geofelt, but what do you mean by table lookups?
In programming, how do you find an item in a table (list) of items?
Simple method: start at the beginning. Is this the item? If yes, it is found. If no, look at the next item in the table. Repeat until found. The bigger the list of items, the longer it takes to find one.
If the item is not found, you search the entire table, taking maximum time.
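The simple method above is a linear search; a minimal Python sketch (the month list is just example data):

```python
def linear_search(table, target):
    """Start at the beginning; check each item in turn until found."""
    for index, item in enumerate(table):
        if item == target:
            return index            # found it
    return -1                       # searched the whole table: maximum time

months = ["January", "February", "March", "April"]
print(linear_search(months, "March"))   # found at index 2
print(linear_search(months, "July"))    # not found: -1
```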
Better method: if the table is sorted in order, you can quit as soon as you reach an item higher than the one you want. Better yet, start in the middle and eliminate half the list each cycle.
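That "start in the middle" method is a binary search; a sketch (the name list is just example data):

```python
def binary_search(sorted_table, target):
    """Start in the middle; eliminate half the remaining list each cycle.

    Requires the table to already be sorted.
    """
    lo, hi = 0, len(sorted_table)
    while lo < hi:
        mid = (lo + hi) // 2
        if sorted_table[mid] == target:
            return mid
        if sorted_table[mid] < target:
            lo = mid + 1            # target must be in the upper half
        else:
            hi = mid                # target must be in the lower half
    return -1

names = ["alice", "bob", "carol", "dave", "erin"]
print(binary_search(names, "dave"))   # found at index 3
```

Because half the candidates are discarded on every cycle, even a table of a million sorted items is settled in about 20 comparisons.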
Even better: hashing, where a computation is made on the item's key, and the result points directly to the slot in an array of items arranged by the same method.
Simple example: how to find the name of a month, given the number of the month. If it is month 3, look in the table and extract the 3rd entry, which says "March".
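Both ideas can be shown in a few lines of Python: the built-in dict is a hash table, and the month example is plain array indexing (the example data is mine):

```python
# Hashing: the dict computes a hash of the key and jumps straight
# to the matching slot -- no scanning through the other entries.
ages = {"alice": 31, "bob": 27, "carol": 45}
print(ages["bob"])  # 27

# Direct indexing: month 3 -> the 3rd entry (index 2, since Python
# lists are 0-based).
months = ["January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"]
month_number = 3
print(months[month_number - 1])  # March
```

In both cases the lookup time barely depends on how big the table is, which is the point: table size is largely insignificant.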
All of this happens in software in just microseconds and is irrelevant unless the list is very large.
One really has no way of knowing how much RAM is "left", or even used. A "working set" number is perhaps useful: the working set is how much RAM an application needs to do its work without excessive hard faults, which must be resolved by I/O to the page file.
XP was very active moving data to and from the page file because RAM was a precious commodity. With the availability of large amounts of cheap RAM, Windows 7 does much less management, preferring to keep more in RAM in anticipation of reuse.
I can think of no reason why more RAM would hurt performance in a Windows 7 64-bit system of today.
You don't know when you have too much RAM, but it will be really obvious when you don't have enough: you open one app and work with it, then you open a second app and work with it. When you try to switch back to the first app, it should be instantaneous. If it takes half a second or more, then you don't have enough RAM and the OS is having to write parts of the first app to the swap file on the hard drive to make room for the second app to run. When you get ready to access the first app again, the OS writes the second app out to the swap file and loads the first app back into memory.
My understanding has been that the less RAM that is being used, the faster it accesses and delivers the data, because it takes less time to look through a small amount of data to find what you need than look through a large amount of data that would take longer to access because there is more data to look through.
That's a misconception: a program doesn't search for data in memory, because it already knows where the data is, and retrieving something from RAM is lightning fast.
If you've ever been to a water park (or something similar), they have these big rooms with tons of lockers, and once you store your street clothes in one of them you get a colored wristband with a key and a number printed on it. When you come back, you simply look for the row with the blue lockers and go to number 14, instead of running mindlessly through the whole room trying the key in every lock. In reality it's a bit more complicated, but it basically works like that.
What geofelt means by 'table lookups' is something that happens within a program and has nothing to do with accessing memory. That is just about how smart the programmer was and how he organizes large amounts of data. The analogy is one person who has everything neatly organized on file cards and another who just has a ton of cards scattered over his desk. The first person will always find a specific card in about the same time, while the second person is only faster if he gets lucky or has fewer cards to look through.
I would imagine the same is true for the hard drive.
A hard drive works like a group of books. When a program requests a specific file, the system looks at the index to see in which book and on what page the file is. Then you only have to pick up the book and flip through the pages until you reach the right one.
That's why HDD access takes longer: it needs to 'flip pages'. It gets even worse if the drive suffers from fragmentation; in that case you have to jump back and forth between pages and even books to retrieve every part of the file, which takes a lot longer, because you can only look at one book and one page at a time (while looking for a single file).
What makes HDD access so much slower is that the disks have to spin to the right point. But HDD size doesn't really affect access times either, as manufacturers 'cheat' by changing the structure of the disks and the way they're accessed.
Conclusion: RAM size doesn't affect access times, because RAM already is lightning fast, and it would always be worse to swap stuff back and forth on the HDD.
HDD size doesn't affect access times, because manufacturers 'cheat' and try to keep average access times as stable as possible regardless of disk size.