
Page file with SSD, indexing

Hello again! I have two questions that I'm wondering about, and I need some insight from you smart people!

Question 1: Page file location and size

I have an 80GB SSD as my main drive and also have 3 standard HDDs for storage: 1 for music, documents, etc., 1 for movies, and 1 for TV recording with Media Center. My OS is Windows 7 Ultimate 64-bit with 4GB of DDR2 PC2-6400 RAM. As it currently stands, I have one single page file set to 6GB. I recall the rule of thumb as using 1.5x your RAM for page file size. Is this correct/OK?

I have that single page file located on my SSD along with the operating system. I figure this will yield the best performance due to the read/write speeds of the SSD. But I am also wondering if this is more beneficial than locating the page file on one of the other standard HDDs that are not used very often.

So which of these scenarios would be the better option:
- page file on same drive as OS with the benefit of SSD speeds
- page file on a different drive than the OS but with the lower speeds of a standard 7200 RPM HDD

Also, if locating the page file off of the OS drive is the better option, should I keep it on one single HDD or split it up between the three HDDs?

Question 2: Windows indexing... on or off?

I'm wondering which scenario is better for making my PC more responsive/faster/etc. What I am not sure of is whether it is better to have indexing on so the system can find things faster, or if it is better to have it off so that the system isn't constantly getting slowdowns from Windows indexing folders/directories that I'm opening up and going through. It seems like every time I open a folder with a lot of files in it (like my movie folder that has 700GB+ in it) there is always a long pause while Windows indexes all the files in it (the green "meter" in the address bar at the top of the window). Is it supposed to do this every time? I thought the point of indexing was to make it so it DOESN'T have to scan the contents of a folder every time you open it up.

Is indexing ONLY used to make searches faster, as in Start > Search? Or does it keep track of things and make navigating your folders faster? Because if it's just to make normal searches faster, I can definitely do without it, as I never use the search feature anyway.

So anyway... that's it. Any insight you can provide is appreciated!

EDIT:

Hmmm... after doing some reading/research, it would seem that a page file on the SSD would be a bad idea even if it is beneficial to performance in the short term, because the constant reads/writes shorten the SSD's life span?

If this is so, it would definitely be better to move it over to one or more of the standard secondary HDDs.

In that case:

- How much total page file should I use? Space isn't an issue; my HDDs hold 2.5TB in total.

- Is it better to have the PF on one single drive or split it up between multiple drives?

- Is it OK to have absolutely NO page file on my OS disk, or should I leave a very small one there... like 256MB or so? I have heard that this is necessary in the case of crashes for error reports, dump files, etc.
  1. Yep, pagefile on SSD will shorten its life.

    If your CPU is a fast quad core, you shouldn't notice indexing even if you have lots of files.

    6GB should be more than enough for 4GB of RAM. Never set the pagefile to zero; if you have 8GB or more, set it to a static 1GB.
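(Aside: the sizing rules in this answer boil down to a tiny function. A rough sketch, treating the 1.5x figure as the old rule of thumb rather than anything official:)

```python
# Sketch of the sizing advice above: 1.5x RAM as the old rule of thumb,
# or a static 1GB once you have 8GB+ of RAM. A guideline, not a
# Microsoft-mandated formula.

def pagefile_size_gb(ram_gb):
    """Rule-of-thumb page file size in GB."""
    return 1.0 if ram_gb >= 8 else ram_gb * 1.5

print(pagefile_size_gb(4))  # 6.0 - matches the OP's 6GB setting
print(pagefile_size_gb(8))  # 1.0 - static 1GB as suggested above
```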
  2. The pagefile as users know it should be disabled on an SSD. HDDs have slower transfer rates, so when you are low on physical RAM the OS uses a page file... SSDs are obviously much quicker, so even if you run out of RAM the SSD transfer rate will be more than quick enough.

    Also never ever defrag a SSD
  3. ulysses35 said:
    Also never ever defrag a SSD

    About this... I opened up the defrag utility to make sure defrag was disabled on the SSD, but Windows is not even showing the SSD in the list of drives for the scheduled defrag. Does Windows 7 automatically exclude SSD drives from the defrag utility?

    Defrag list/schedule: http://i46.tinypic.com/10eljyb.png
  4. Yes it does. About the pagefile: I have 6GB of RAM and no page file at all. The only problem I had with it is Titan Quest refusing to start without it. I've never run out of RAM so far.
  5. Eh... true, I rarely if ever use up all 4GB of my RAM. But I set a 1GB page file on each of the 3 HDDs anyway. It doesn't really hurt anything to have more than I need, and the 3GB is only like 0.2% of my storage space, so it doesn't hurt me any.

    I left a 200MB page file on the SSD, as when I tried to remove it Windows told me it was a bad idea, lol.
  6. Imo, you really only need a 512MB pagefile (assuming you aren't running CAD or vid/photo editing) even on a system with 4GB RAM.

    And DO remove that 200MB Pagefile from the SSD. Don't listen to what the OS tells you.
  7. You want to minimize read/write operations on the SSD. Whether you have 200MB or 1GB or 2GB, a read/write operation is a read/write operation.
  8. First - I always locate the page file on a 2nd drive when possible.

    For SSDs: from a performance perspective, there is little difference. The biggest performance benefit is in random writes, followed by random reads. The approximate ratio of reads to writes is 40 to 1, so there is little loss in using the HDD for the page file, but keeping it off the SSD helps to minimize the performance degradation of the SSD. Bear in mind that the few SSDs that support the Win 7 TRIM command (Intel G2, Patriot Torqx, and OCZ Vertex; there may be a few more) have a much reduced performance hit in this area.

    I have an Intel 80GB G2, with the page file set to a HDD. Win 7 WEI = 7.8 right after installing the OS; after installing some programs and using it for a week, still 7.8. (WEI is not the best benchmark, but I have not run CrystalDiskMark on the Intel.)

    I also have a 128GB Torqx in a laptop. Single-drive system (4GB RAM) with a 1GB page file located on the SSD. No noticeable degradation after 2 months of usage. Had Win 7 RC on it, then upgraded to the RTM version. (Used CrystalDiskMark on it.)

    Added - you do not need to minimize READs, only WRITEs.

    As to the size of the page file: when it's on the HDD, no biggie. If you want, try 1GB, and if you do not get any "Out of Memory" or "Low Memory" warnings you're all set; you can always increase it. One point: set min/max to the same size.
  9. Bah humbug. More SSD paranoia. You will not kill your SSD with a page file. It will easily survive the warranty period and likely beyond that. Benchmarking kills SSDs. That said, you will not benefit a great deal from the page file being on the SSD unless you constantly run low on RAM and use the page file excessively in a short period.

    @RetiredChief: Were you expecting degradation in just 2 months? I expect my hardware to at least last the warranty period unless I abuse it.
  10. Disable the page file unless you have some software that refuses to work without it.
    Windows is a paging OS and will keep paging even if you disable the pagefile, but it will use random space on the HDD instead of a dedicated pagefile for paging. This would be bad on a HDD because of the slow random seek speed. On an SSD, seek time is very fast, so it does not matter. The benefit will be that it will randomly write to different locations on the SSD and not the same space, spreading the cell wear across the SSD instead of writing to the same cells all the time and wearing them out.
    If you do need a pagefile for some software, disable the pagefile, delete it, and create a new one regularly so that it is allocated to a different part of the SSD.
  11. ainarssems said:
    The benefit will be that it will randomly write to different locations on the SSD and not the same space, spreading the cell wear across the SSD instead of writing to the same cells all the time and wearing them out.

    This is not accurate. Windows does not specify where to write data on the SSD, the SSD controller does this. Windows has no knowledge of the physical location of any data or even if it is moved by the controller, Windows only knows the data's Logical Block Address. The controller maps the LBA with the physical NAND location. The controller will automatically spread data out over the drive to ensure only a small difference exists between the block with the most erases and the block with the least. For this very reason there is technically no such thing as a sequential read or write on an SSD.
  12. randomizer said:
    This is not accurate. Windows does not specify where to write data on the SSD, the SSD controller does this. Windows has no knowledge of the physical location of any data or even if it is moved by the controller, Windows only knows the data's Logical Block Address. The controller maps the LBA with the physical NAND location. The controller will automatically spread data out over the drive to ensure only a small difference exists between the block with the most erases and the block with the least. For this very reason there is technically no such thing as a sequential read or write on an SSD.


    I was considering that it might be like that because of wear leveling on SSDs, but I thought it only works when writing new files and that it does not actually move data around when the data in the same file changes and the file size does not increase, as is the case with a pagefile.

    The basis for my thoughts is that while defragmenting SSDs is considered unnecessary and best left disabled to reduce write cycles and prolong life, tests show that defragmenting an SSD does increase performance.
    When data on the same file changes, the controller will not overwrite the same blocks. Instead, it will try to write to a fresh block with no data. If none are available, it will have to cache the data in a partially filled block, erase the block, then re-program it with both the old data and the new data to be written. Once this occurs, the map between the LBA and the physical NAND location is updated to ensure that when the data is read again the controller accesses the correct blocks.

    Data is only moved during idle periods or when absolutely necessary, such as when all free blocks are gone and some over-provisioned NAND is required to shift around and consolidate some data and free some blocks (this mostly happens when the drive is nearly full). It is also moved when a block can't be erased or programmed due to excessive use, in which case the data is read from that block, stored elsewhere, and the failed block is marked as bad and never written to again. In this event, a block in the over-provisioned NAND is activated to ensure that drive capacity remains the same.

    Defragmentation is useful for different reasons than on a HDD. Defragging a HDD reduces drive head movement because access time is horrendously slow otherwise. SSDs have the same access time for every cell, so this is not an issue. The "problem" with SSDs is that they do not write sequentially. The controller selects where to write by "asking" two basic questions (there's probably a lot more technical stuff behind it but let's not make things more complicated than necessary):

    1) Are there any free blocks (ie. any blocks that have never been written to or have been erased so that they are filled with all 1's)?
    2) Which blocks have the lowest erase count?

    If you (hypothetically) have an SSD with only 2 blocks – one has been half filled while the other is free – and you wish to write half a block's worth of data, the controller will write to the 1 free block rather than erasing and reprogramming the partially filled block, as part of industry standard wear levelling (note that a block cannot be programmed twice without first being erased). This is why SSDs are extremely fast for a time. When all free blocks visible to the OS have been used up, the drive will begin to slow because most writes will require a program/erase cycle and erasing is very slow compared to simply programming a free block.

    What defragging will do is grab data from partially filled blocks, write that data to another partially filled block to fill it, have the first block marked as "invalid" and then allow you to use another program to "erase" the first block (write all 1's to it) so that rather than having two blocks that will require a program/erase next time they are written to you now have only one and one free block that runs at "full speed."

    You must use the correct defrag algorithm for SSDs though. You do not want the defrag program to try and consolidate bits of the same file as this is not necessary due to the almost zero access time. You want it to consolidate any data in any location. After this you want to write all 1's to the blocks that are now "invalid," which is in essence what TRIM causes.

    For info on how to do this check here: http://www.ocztechnologyforum.com/forum/showthread.php?t=64753

    I realise I am well outside of the scope of this thread now :ange:
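(Aside: the block-selection logic described in this answer, the two questions plus the LBA remapping from the earlier post, can be sketched as a toy flash translation layer. Everything here is a simplification for illustration; real controllers juggle pages within blocks, garbage collection, over-provisioning, and much more:)

```python
# Toy flash translation layer (FTL) illustrating the two questions above:
# (1) prefer free blocks, (2) among those pick the lowest erase count.
# Overwriting an LBA remaps it to a new physical block; the old block
# becomes stale and is only erased when the drive runs out of free blocks.

class ToySSD:
    def __init__(self, num_blocks):
        self.free = set(range(num_blocks))   # never-written or erased blocks
        self.erase_count = [0] * num_blocks
        self.lba_map = {}                    # logical address -> physical block
        self.data = {}

    def write(self, lba, payload):
        if self.free:
            # Fast path: program a fresh block, lowest erase count first.
            block = min(self.free, key=lambda b: self.erase_count[b])
            self.free.remove(block)
        else:
            # Slow path: reclaim a stale block left by an earlier overwrite.
            # Needing an erase before reprogramming is what slows a full SSD.
            stale = [b for b in range(len(self.erase_count))
                     if b not in self.lba_map.values()]
            block = min(stale, key=lambda b: self.erase_count[b])
            self.erase_count[block] += 1
        old = self.lba_map.get(lba)
        if old is not None:
            # The old physical block now holds invalid data; not erased yet.
            self.data.pop(old, None)
        self.lba_map[lba] = block            # remap LBA to the new location
        self.data[block] = payload

    def read(self, lba):
        return self.data[self.lba_map[lba]]

ssd = ToySSD(num_blocks=2)
ssd.write(0, "v1")      # goes to a free block
ssd.write(0, "v2")      # overwrite: remapped to the OTHER free block
print(ssd.read(0))      # v2 - same LBA, different physical block
print(ssd.erase_count)  # [0, 0] - no erase needed yet
ssd.write(0, "v3")      # no free blocks left: an erase is now required
print(ssd.erase_count)  # one block has now been erased once
```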
  14. My 2 cents, fwiw:


    Take a careful look at the CONTIG freeware:

    http://technet.microsoft.com/en-us/sysinternals/bb897428.aspx


    We've played around with it, and found a sweet spot
    by doing the following:

    (1) all of our hard drives have a primary partition that
    is identical in size to the C: system partition e.g. 20-30GB;

    (2) on a secondary HDD, we first format the primary
    partition to be identical in size to C:

    (3) after formatting, we disable the Indexing Service on that partition;

    (4) we then run the CONTIG software from Command Prompt:

    contig -v -n P:\pagefile.sys 6144000000



    attrib P:\pagefile.sys +A +S +H

    where,

    "P" is the drive letter of the Partition in question;

    (5) optionally, DEFRAG that new partition,
    if you're not satisfied with its exact location:
    defragging may need to be done before
    attributes are changed (try it both ways);

    (6) move your swap file to P:\pagefile.sys


    (Remember that partitions start and end
    on cylinder boundaries, and logical sector addresses
    ascend within the first cylinder, until all platter surfaces
    have sequential sector addresses, then inwards
    towards the spindle axis, in cylinder sequence.)

    The above procedure results in moving the swap file
    to the lower sector addresses found in that
    primary partition on your secondary HDD(s).

    That in turn results in effecting "short strokes"
    on the read/write armature.

    Also, to maintain the same or similar recording density
    across all platter surfaces, inner tracks are necessarily
    much slower than outer tracks in direct proportion
    to their geometric circumferences. This is yet another,
    frequently overlooked reason why "short strokes"
    are superior: the data rate is FASTEST at the outermost
    cylinders. Just run HDTune to see a graph of this behavior.


    Coupled with a high-performance HDD, preferably
    one that uses perpendicular magnetic recording ("PMR"),
    the above procedure results in quite adequate performance
    without risking the penalties that can occur when
    frequent WRITEs are done to a Nand Flash SSD.

    Funds permitting, you may want to create this
    custom pagefile.sys on a 10,000 rpm VelociRaptor ("VR")
    or similar high-speed HDD, for added performance.

    In light of the latter, 2 x RE3 HDDs in RAID 0 will further
    enhance the performance of the partition where
    the new pagefile.sys resides. And, the remainder
    can be formatted as a relatively fast data partition e.g.
    2 x 750GB RE3 HDDs - 50GB for the primary partition
    leaves 1.45 TB for a dedicated data partition (1500 - 50).

    (You can't have that much capacity with SSDs
    without spending an awful lot of money.)


    Be advised that WD's current VRs do NOT use PMR, however,
    so their RE3 and RE4 HDDs are performing on a par with VRs:

    http://www.wdc.com/en/products/products.asp?driveid=503


    In general, we have found that the latest products
    being manufactured by Seagate and Western Digital
    do exhibit reliability that is vastly improved over
    the products made, say, 5+ years ago.

    If you're not happy with this swap file management,
    you can always move pagefile.sys somewhere else,
    or allow Windows to manage it by default.


    I just wanted you to know that there are alternatives
    available to you -- like CONTIG -- that do not cost anything more
    than a little time to learn and experiment. For example, you can
    easily play around with CONTIG withOUT moving your swap file,
    just to get some experience with it.

    Another, similar freeware program is PageDefrag:

    http://technet.microsoft.com/en-us/sysinternals/bb897426.aspx


    I hope this helps.


    MRFS
  15. From Randomizer
    @RetiredChief: Were you expecting degradation in just 2 months? I expect my hardware to at least last the warranty period unless I abuse it.

    There are a lot of SSDs out there that will develop considerable degradation well before the warranty runs out: those that are not TRIM-aware and/or are used on an XP/Vista operating system. Some do have a utility that mimics the TRIM command, which has to be run manually and depends on use.

    With a TRIM-enabled SSD and Win 7 - answer: no.
    Bear in mind that only a relatively few SSDs are TRIM-enabled. Intel only recently provided a firmware update for the G2 (0AHD) that enabled the TRIM command. And I have not seen any degradation (over two weeks). I do not expect any with Win 7 and the 0AHD firmware.

    My Patriot Torqx (also TRIM-enabled) was in a Win 7 RC (non-TRIM), single-drive laptop environment, and yes, there was some degradation over a short period of time (important to note this was a small change and not noticeable in normal operation). I used patriot.exe on a weekly basis (I read somewhere that this was the recommended interval in a non-TRIM environment).

    PS - good explanation
  16. > Were you expecting degradation in just 2 months?
    > I expect my hardware to at least last the warranty period unless I abuse it.


    With Nand Flash SSDs, the kind of degradation which has been documented
    is not a "hardware" degradation, so much as a drop in performance
    particularly when doing WRITEs.

    And, yes this particular degradation happens more or less rapidly
    as a function of how many WRITEs are performed, and which kind --
    random or sequential. Thus, the measured drop can happen
    LONG before the warranty period runs out!

    AnandTech.com published a pretty good explanation here
    (there are others):

    http://www.anandtech.com/printarticle.aspx?i=3631

    ... particularly this discussion and what follows:

    http://www.anandtech.com/storage/showdoc.aspx?i=3531&p=8


    MRFS
  17. MRFS said:
    With Nand Flash SSDs, the kind of degradation which has been documented
    is not a "hardware" degradation, so much as a drop in performance
    particularly when doing WRITEs.

    I'm well aware of this drop in performance, but I don't call it "degradation" because that's a rather harsh term that I've never seen applied to HDDs even though the same thing happens to them (different cause, same end result). TRIM is not going to help if you don't delete any data though. If you repeatedly write data to the drive, including "overwriting" without deleting, TRIM will not prevent slow-down.

    If you really want a warm fuzzy feeling from high speeds when you run a benchmark like ATTO you need only delete 250MB of data and it will look as though you've got a fresh drive again.
  18. My 2 cents:

    1) Leave the page file on the SSD. If you have a page fault, that is a synchronous event, i.e., the program waits until it is satisfied, so the faster it is resolved, the better. With adequate RAM, there is little need for a page file anyway. Just install enough RAM; it's cheap these days. With adequate RAM, the number of writes to the page file will be minimal. The SSD will get wear only when a VERY large number of writes are sent to it. From the numbers I have seen, it is doubtful that the typical user will wear out their SSD in ten years. I would allow Windows to pick the page file size. Only if I were really short on space would I bother to restrict its size.

    2) Don't defrag the SSD. It is not necessary because locality of file blocks does not matter the way it does with conventional hard drives. Windows 7 will detect an SSD by examining its performance and exclude it from defrag operations. For other OSes, disable it yourself.

    3) Leave indexing on. Data is indexed using low-priority I/O and CPU. With a good SSD, the benefit is less than on a hard drive, but why not operate as fast as possible?

    4) In general, leave everything at the defaults unless you have a problem. Life is simpler that way.
  19. geofelt said:
    2) Don't defrag the SSD. It is not necessary because locality of file blocks does not matter the way it does with conventional hard drives.

    Read my wall of text above and you'll see that fragmentation is an issue with SSDs, but a different issue.
  20. Our LAN of high-performance workstations all perform better,
    on average, with Indexing Service DISABLED.

    We also noticed that the Indexing Service was causing
    problems for our ramdisks enabled with RamDisk Plus software
    from www.superspeed.com . Without the Indexing Service
    our ramdisks work perfectly:

    http://www.supremelaw.org/systems/superspeed/RamDiskPlus.Review.htm


    As their lower sector addresses fill up, all HDDs will "degrade"
    in direct proportion to the track circumference changes,
    holding recording density more or less constant.
    Cf. HDTune.

    Thus, if an innermost track is one-half the circumference
    of an outermost track, the raw data rate after armature seeks
    will drop by 50%. Yes, I call that "degradation" -- analogous
    to slowing the rpm from 7,200 to 3,600 :)


    MRFS
  21. Indexing doesn't need to be disabled, it should just be disabled for the SSD. Of course, there's no harm in disabling the service altogether, and some people find it slows down their system. I haven't noticed this slowdown, but I don't have much data either.

    I see that someone does call it degradation with regard to HDDs, but if you look around most forums you don't see it used nearly as often as it is in reference to SSDs. The issue is overstated as though SSDs have some serious flaw that HDDs do not, when in fact they are functioning exactly how they should be. This is what I was getting at :)
  22. You have absolutely NO use for a page file; disable it completely ASAP. Trust me, especially if you're using W7: it's smart enough to APPRECIATE all that RAM you have installed. Vista/XP are too stupid and become cranky over time.
  23. You can't disable the page file in Vista or Win 7.
  24. randomizer said:
    You can't disable the page file in Vista or Win 7.

    Yes you can (these instructions are for Windows 7, I assume Vista is similar):

    -> Click on the "Start" orb
    -> Right-click on "Computer" and select "Properties".
    -> Click the "Advanced" tab, then click the "Settings" button in the "Performance" section
    -> Click the "Advanced" tab, then click the "Change..." button in the "Virtual Memory" section
    -> Uncheck the "Automatically manage paging file size for all drives" checkbox
    -> For each drive in the list, click to select it, click the "No paging file" radiobutton, then click the "Set" button.
    -> Click "Yes" to the "If you disable the paging file..." messagebox.
    -> Reboot.
  25. That doesn't disable it, that just means the OS will use the system drive for the page file as necessary.
  26. randomizer said:
    That doesn't disable it, that just means the OS will use the system drive for the page file as necessary.

    That's an interesting statement. Do you have any links confirming that? It's something that I have never seen stated by Microsoft. Indeed, in this link they imply that you can (but shouldn't) disable the paging file.
  27. randomizer said:
    That doesn't disable it, that just means the OS will use the system drive for the page file as necessary.
    The system drive also has the page file disabled:



    Trust me, this really, truly disables the pagefile. I just ran multiple copies of a program that uses up a great deal of memory so that even on my 12GB system all of the memory would be consumed. With no page file, what eventually happens is that the screen goes black because the desktop window manager itself ends up without any memory to use.

    I was able to use "Alt-Spacebar, C" to blindly close my test programs which freed enough memory to get the DWM working again, and the system carried on as if nothing had happened. And in all this time no pagefile was created on the system drive.
  28. As a matter of fact, no, I have no links confirming it. I don't remember where I read it (and I know I read it in more than one place) and I should really have added a note to take it with a grain of salt.
  29. Although I have now read it, I will also take it with a pinch of salt.
  30. sminlal said:
    Trust me, this really, truly disables the pagefile. I just ran multiple copies of a program that uses up a great deal of memory so that even on my 12GB system all of the memory would be consumed. With no page file, what eventually happens is that the screen goes black because the desktop window manager itself ends up without any memory to use.

    I see, it seems my info is a bit outdated then. I am still trying to find where I read it but so far I haven't turned up anything.
  31. It's true that disabling the pagefile doesn't actually turn off paging or virtual memory support. Program code doesn't change at execution time - this means that when the system gets low on memory and pages containing program instructions are kicked out, they don't really need to be written to the pagefile. Instead, the memory pages are simply marked as "free" and if the system needs to reload the code it does so directly from the EXE file.

    Not only that, but a lot of programs do file I/O by mapping a portion of their virtual address space to the file, leaving the gruntwork to be done by the paging subsystem. It's actually a very efficient way to do I/O if the file will fit into the program's virtual address space.

    So disabling the pagefile doesn't disable paging. But it really does get rid of the pagefile.
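(Aside: the memory-mapped I/O described in this answer is easy to demonstrate. Python's `mmap` module wraps the same OS facility, `CreateFileMapping` on Windows and `mmap(2)` elsewhere; a minimal sketch:)

```python
# Minimal sketch of memory-mapped file I/O: the file's contents appear
# directly in the process's address space, and the paging subsystem
# does the actual disk reads/writes behind the scenes.
import mmap
import os
import tempfile

# Create a small scratch file to map.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello, paging subsystem!")
os.close(fd)

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:   # map the whole file
        print(mm[:5])                      # b'hello' - a read via memory access
        mm[:5] = b"HELLO"                  # a write via memory access
        mm.flush()                         # ask the pager to flush dirty pages

with open(path, "rb") as f:
    print(f.read())                        # b'HELLO, paging subsystem!'

os.remove(path)
```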
  32. Paging and page file are not quite the same thing though. The page file ensures that memory requests are fulfilled even when there isn't sufficient physical RAM (with enough processes running there may be no memory pages that are eligible for deletion - then one runs into the sort of situation described above or, eventually, a system crash). And, of course, not all memory pages hold data from .EXE files, or from any disk file.
  33. Do everything with your SSD except defrag. The SSD was designed to work super fast, and the page file or setting up partitions works exactly the same as on a standard HDD. Just don't DEFRAG.

    Edited:

    I forgot to mention that you should leave your file indexing on as well. Some HDD backup software relies on file indexing for your various backup methods.

    If you want a faster computer, then you need an expert to optimize the system using all the advanced Windows settings. This does not include overclocking your CPU.

    I have optimized over 1000 systems since 2002 remotely from my home computer. I know the scoop. Tech-heads can get very carried away with too many tweaks. There is a simple routine for getting the most Oomph out of your computer as it is designed to give you. So keep googling and find the advanced settings adjustments and you'll be fine. You can contact me too if you like, but to post everything in here would take two web pages. :o
  34. Ok, I too have an Intel 160GB X25-M G2 as my C: drive, and it is dedicated to the OS only. I removed the paging file, and when I did this, Mass Effect 2 (loaded on my E: drive) would stop playing shortly after game start, when the keyboard & mouse stop working. I then went back and set a minimum of 400K and it runs OK. I guess you need a page file.

    Tom
  35. Some programs don't play nice without a page file. Photoshop 6 was in that category. But modern systems can easily have enough memory to make a pagefile redundant, and if I knew that a particular program wouldn't work without a pagefile then I'd buy something else instead.
  36. I don't understand why some want to disable or delete the page file. It is there for a reason. XP used it to compensate for insufficient ram. Vista and windows 7 use it less because they utilize the larger ram sizes available today. To fiddle with it says you are smarter than the OS developers. That may be, if you really understand your program and environment. But why bother? What do you gain? What do you lose?
  37. geofelt said:
    I don't understand why some want to disable or delete the page file. It is there for a reason. XP used it to compensate for insufficient ram. Vista and windows 7 use it less because they utilize the larger ram sizes available today. To fiddle with it says you are smarter than the OS developers. That may be, if you really understand your program and environment. But why bother? What do you gain? What do you lose?


    So you're not thrashing the *** out of your expensive SSD?

    Yes, XP used it to compensate for insufficient RAM, but with 12GB RAM now reasonably affordable the page file isn't necessary. Some may want to reclaim the disk space, others may want to preserve their SSD in light of the technically true but dubiously supported belief that SSDs will die in 6 months if you write to them too often.

    It's all about options. Personally, with 12GB RAM I'll have a page file set to minimum on a HDD just so any apps expecting it won't complain, but I doubt I'll be hitting memory issues. If I do, I'll just whack a 1GB page file on my SSD OS RAID and by the time the drives die SSD tech will be much more robust and affordable so it's easy to replace.
  38. If you have enough ram, you won't use the page file. If you don't have enough ram, then you need one. Let the OS figure it out. If you are short on space on a SSD, then you might want to keep the page file size to a minimum.
  39. Oh, I entirely agree, but some would just prefer to kill the pagefile and get some space back if they're sure they won't need it.

    Each to their own really
  40. geofelt said:
    To fiddle with it says you are smarter than the OS developers.

    Maybe we are. The Linux kernel only needs 256kB under circumstances where a page file is expected, but you can leave it out completely if you want. This is because it doesn't write to the disk unless it's actually necessary. Windows writes to the page file when it decides that it's necessary (i.e. pretty much all the time).

    Anonymous said:
    So you're not thrashing the *** out of your expensive SSD?

    If you don't thrash your SSD, you might as well be using a HDD :)
  41. geofelt said:
    If you have enough ram, you won't use the page file. If you don't have enough ram, then you need one. Let the OS figure it out.
    The problem is that the OS is brain-dead when it comes to figuring out pagefile sizes. It typically defaults to 1.5 X the size of RAM, which is ludicrous on a 12GB system unless you're running an insane amount of stuff.

    Windows does this because it takes a "better safe than sorry" approach. That approach was really important when systems were memory-bottlenecked. But RAM is extremely cheap these days, and it's easy to build a system that has more RAM than you'll ever need.

    What it boils down to is this: use your system to do your typical work for a week or two and make a note of the highest amount of committed memory that you've ever consumed. Add, say, 25% or even 50% as a safety margin. If that works out to be smaller than the amount of RAM you have, then a pagefile is superfluous. Completely. There is no reason in the world that you need one - period.
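(Aside: that sizing procedure amounts to a couple of lines of arithmetic. The numbers below are illustrative, not measurements from any particular system:)

```python
# Sketch of the sizing approach above: take the observed peak commit
# charge, add a safety margin, and compare against installed RAM.
# Anything the margin-adjusted peak exceeds RAM by is what the
# pagefile would need to cover; zero means no pagefile is needed.

def pagefile_needed_gb(peak_commit_gb, ram_gb, margin=0.25):
    """Return the required pagefile size in GB (0 means none needed)."""
    worst_case = peak_commit_gb * (1 + margin)
    return max(0.0, worst_case - ram_gb)

# 12GB system that never committed more than 8GB: no pagefile needed.
print(pagefile_needed_gb(peak_commit_gb=8, ram_gb=12))   # 0.0

# 4GB system peaking at 5GB committed: 2.25GB of pagefile required.
print(pagefile_needed_gb(peak_commit_gb=5, ram_gb=4))    # 2.25
```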
  42. I guess the 6GB of 1600MHz RAM on my system doesn't cut it. Looks like I should go to 24GB and max out my motherboard so I don't wear out my SSD. But as you say, I'll probably have a new 500GB SSD by the end of this year for about what I paid for the 160GB, and it will have a longer life.
  43. 6GB is fine if the programs you run don't use up more than that. And don't forget that for SSDs, the downside to a lot of memory is that you end up with a large hiberfil.sys in your root directory (sized in proportion to your installed RAM) that you can't move to another drive. The only way to eliminate it is to disable sleep mode, which you may not want to do.
  44. You only have to disable hibernate. You can still use sleep mode.
  45. aipongkooja said:
    You only have to disable hibernate. You can still use sleep mode.
    Technically you're correct. However, the default sleep mode is "hybrid sleep," which I prefer over standard sleep because standard sleep depends on continuous power to RAM to maintain the state of suspended programs. In hybrid mode the hibernation file is still required to store a copy of memory contents, so that in the event of a power interruption you can still resume your work after it's reloaded from disk.
  46. Yeah, I can see the benefits of it if you tend to leave things open when you sleep/hibernate, or especially on a laptop for that little extra power saving. But for people who just use it as a "soft shutdown" to save power, noise, and/or component life (fans, etc.), you don't have to worry: turning off hibernate still leaves sleep available. That's all I really wanted to convey. (Mostly because it took me forever to google whether or not I could still use sleep mode if I disabled hibernate. I spent a month with hibernate disabled, telling my computer to never go to sleep, because I thought they were the same thing, lol) :)
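    For reference, hibernation (and with it the hiberfil.sys file) can be toggled with the stock `powercfg` tool; sleep remains available afterwards. Run from an elevated Command Prompt on Windows 7:

    ```shell
    :: Disable hibernation and delete hiberfil.sys:
    powercfg -h off

    :: Re-enable it later if you change your mind:
    powercfg -h on
    ```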
  47. Wow, this thread has been fantastic. Thanks especially to sminlal and randomizer for all the information they shared. I just ordered 2 Intel X-25M SSDs and came to Tom's Hardware to make sure I was prepared to set them up correctly. From the information in this thread I pretty much know exactly what I need to do.

    I just have one question left. I saw on another thread (http://www.tomshardware.com/forum/256086-32-which) someone advising to partition the drive shorter than its capacity (around 10 - 20 GB) "making sure that the last 10-20GB are never in its lifetime written to."
    Is this sound advice?
  48. emeraldyautja said:
    Wow, this thread has been fantastic. Thanks especially to sminlal and randomizer for all the information they shared. I just ordered 2 Intel X-25M SSDs and came to Tom's Hardware to make sure I was prepared to set them up correctly. From the information in this thread I pretty much know exactly what I need to do.

    I just have one question left. I saw on another thread (http://www.tomshardware.com/forum/256086-32-which) someone advising to partition the drive shorter than its capacity (around 10 - 20 GB) "making sure that the last 10-20GB are never in its lifetime written to."
    Is this sound advice?


    I don't think so.

    The purpose of doing so is to ensure that there are always some spare chips available.
    This will prolong the life of the SSD and improve the efficiency of writes.

    Every SSD already allocates spare chips for these purposes. The 80GB Intel drives already have 16GB in spares which are not available to you.
    In addition, for normal desktop usage (vs. server intensity) they should last about 1,000,000 hours. That is about 100 years; long after the drives become obsolete.
    By "lasting", I mean that you will eventually get failed writes, but your data will still be readable, giving you plenty of time to address the issue.

    You paid for it, go ahead and use it all.
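    As a back-of-the-envelope illustration of why spare area exists even before the manufacturer reserves anything extra: advertised capacities are decimal (10^9-byte) gigabytes, while NAND flash is built in binary (2^30-byte) gibibytes. The sketch below computes only that decimal/binary gap for a nominal 80GB drive; it is not Intel's actual spare-area figure:

    ```python
    # Advertised capacity uses decimal GB; NAND is built in binary GiB.
    ADVERTISED_GB = 80

    user_bytes = ADVERTISED_GB * 10**9   # what the OS can write to
    nand_bytes = ADVERTISED_GB * 2**30   # raw flash, assuming 80 GiB of NAND
    spare_bytes = nand_bytes - user_bytes
    spare_pct = 100 * spare_bytes / nand_bytes

    print(f"{spare_bytes / 10**9:.1f} GB spare ({spare_pct:.1f}% of the flash)")
    # -> 5.9 GB spare (6.9% of the flash)
    ```

    Any spare area the manufacturer sets aside on top of this gap only adds to the headroom the controller has for wear leveling.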
  49. geofelt said:
    I don't think so.

    The purpose of doing so is to ensure that there are always some spare chips available.
    This will prolong the life of the SSD and improve the efficiency of writes.

    Every SSD already allocates spare chips for these purposes. The 80GB Intel drives already have 16GB in spares which are not available to you.
    In addition, for normal desktop usage (vs. server intensity) they should last about 1,000,000 hours. That is about 100 years; long after the drives become obsolete.
    By "lasting", I mean that you will eventually get failed writes, but your data will still be readable, giving you plenty of time to address the issue.

    You paid for it, go ahead and use it all.



    Awesome, thanks geofelt. I am glad I don't have to do that, and also glad that Intel's engineers thought of that. Wow, SSDs have come a long way, but I am excited to see how much further they can go.