More about 16GB pagefile
  1. If you need a pagefile, i.e. if you are using a lot of RAM, put it on the HDD and leave it large; otherwise, make it really small (1GB, fixed), preferably also on the HDD.
  2. If I were you, I'd certainly disable the pagefile on the SSD. Shift it to the next-fastest drive, preferably the platter drive, and let the system manage it.
    For graphics processing the 2GB on the 560 Ti is ample, and 16GB of RAM is overkill for most purposes.
    So you don't really need a pagefile as such, but to be on the safe side you could set the same 4-8GB pagefile on the other drive.
  3. You can't just not have a pagefile, as some things require one for compatibility even if it isn't actually going to be used.
    I would reduce it in size and put it on the HDD, as others have said.

    Mactronix :)
  4. Thank you guys for the quick suggestions. Looks like I'll be moving the pagefile.sys off the SSD to the 2nd hard drive, and as I have the space on it, I'll let Windows self-manage it again.

    thank you 13thmonkey, alyoshka, mactronix
  5. I recently built a system and set up a 2-disk RAID 0 scratch disk. I dumped the swapfile (system-managed) on that and it flies; the performance gain was well worth the cost and effort.
  6. Don't let it self-manage; fix the size. That'll stop it fragmenting on the HDD.

    On SSDs that's a whole different issue.
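The fixed-size-on-the-HDD setup suggested above can be scripted from an elevated command prompt. A minimal sketch using wmic, assuming the SSD is C:, the HDD is D:, and a 4GB pagefile; drive letters and size are illustrative, not from the thread:

```shell
:: Stop Windows managing the pagefile automatically (run as administrator)
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False

:: Remove the pagefile from the SSD (assumed here to be C:)
wmic pagefileset where name="C:\\pagefile.sys" delete

:: Create a pagefile on the HDD (assumed D:); setting InitialSize equal to
:: MaximumSize fixes the size so the file never grows and fragments
wmic pagefileset create name="D:\\pagefile.sys"
wmic pagefileset where name="D:\\pagefile.sys" set InitialSize=4096,MaximumSize=4096
```

A reboot is needed before the change takes effect; the same settings are reachable through System Properties > Advanced > Performance > Virtual memory.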
  7. @13thmonkey, do you mean this for a RAID array or for hard disks in general?
  8. I think both (I don't believe RAID makes a difference to how space is allocated on an HDD). HDDs update the same sector the data was originally in, but if the pagefile grows it needs extra space, and if there is no free space just after the end of the pagefile, it fragments. SSDs, as you know, can't update the same spot and hence have a different issue.
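The growth-causes-fragmentation point above can be illustrated with a toy model: a disk as a list of blocks, where growing a file past occupied neighbouring blocks forces it to continue elsewhere. Everything here (block layout, names) is invented for illustration:

```python
# Toy model of pagefile fragmentation on an HDD: a file that grows fragments
# when the blocks just past its end are already occupied by other data.
def grow_file(disk, name, extra_blocks):
    """Grow `name` by extra_blocks on `disk` (a list of block owners,
    None = free) and return the number of fragments the file ends up in."""
    end = max(i for i, owner in enumerate(disk) if owner == name)
    placed = 0
    i = end + 1
    while placed < extra_blocks and i < len(disk):
        if disk[i] is None:          # skip occupied blocks: this causes a jump
            disk[i] = name
            placed += 1
        i += 1
    # Count contiguous runs belonging to the file = fragment count
    fragments, prev = 0, None
    for owner in disk:
        if owner == name and prev != name:
            fragments += 1
        prev = owner
    return fragments

# Pagefile occupies blocks 0-3 with other data sitting right after it:
# growing by 3 blocks splits the file into 2 fragments.
print(grow_file(["page"] * 4 + ["other"] * 2 + [None] * 6, "page", 3))  # 2

# With free space directly after the file (or a fixed size that never
# grows), it stays in a single contiguous run.
print(grow_file(["page"] * 4 + [None] * 6, "page", 3))  # 1
```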
  9. Thanks again everybody. 13thmonkey, thank you for the extra tip to prevent it fragmenting. Could you suggest a size? I've over 230GB free on the 500GB Seagate, so space isn't an issue. I'm sure you can tell I'm a bit out of my depth here.

    My normal daily workflow is pretty heavy; that's why I went with the highest specs I could afford. Normally I'd have Illustrator, Photoshop & InDesign all running at the same time. Really I just think of them all as one massive program, as they each have different strengths, so I'd be threading different elements through them with the linked feature. Plus I've taken a real liking to ZBrush, so soon it could be me pushing a file back & forth through ZBrush & Photoshop.

    Actually, just thinking: I have the scratch disks for all the Adobe software going to the 2nd hard drive, the one I'll be moving the pagefile.sys to. Will that have any effect worth mentioning if Windows is using the pagefile.sys while Photoshop & Illustrator are using their scratch disks?

    Thanks again
  10. I'd say go with 16GB.
  11. OK, thanks for that. With an SSD, the wear-levelling algorithms won't allow the same cells/sectors to be reused until they have to be.
  12. But if you change 1 bit of data, it's a whole page read-write-reassign (with the old page marked ready for erase) on an SSD; an HDD just changes that bit once it finds the logical block. Hence for changing data at only a bit level SSDs aren't great, whereas for data being changed in bulk they are just as 'efficient' as HDDs. That's why I think they are not a great choice for a pagefile. I have 2x 30GB OCZ Solids, one of which I was using for the pagefile and the other for scratch. These are pre-TRIM, and it choked and caused a blue screen. On a nearly full SSD with TRIM and little over-provisioning the same could happen, with the drive responding slowly whilst waiting for TRIM to kick in or some garbage collection to work.
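The whole-page rewrite cost described above can be put into rough numbers. A sketch assuming an illustrative 4 KiB flash page size (real drives vary, and controllers may coalesce writes):

```python
import math

# Assumed illustrative flash geometry: 4 KiB pages. Real SSDs vary.
PAGE_BYTES = 4096

def write_amplification(bytes_changed, page_bytes=PAGE_BYTES):
    """Bytes physically rewritten per byte logically changed, assuming
    every touched page must be rewritten in full (old page marked for erase)."""
    pages_touched = math.ceil(bytes_changed / page_bytes)
    return pages_touched * page_bytes / bytes_changed

# Changing a single byte rewrites a whole 4 KiB page: 4096x amplification.
print(write_amplification(1))          # 4096.0
# Bulk writes of whole pages are ~1x, as 'efficient' as an HDD write.
print(write_amplification(4096 * 8))   # 1.0
```

This is why small scattered writes (the typical pagefile pattern) are a worst case for SSD wear, while large sequential writes are not.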
  13. And that's one of the main reasons I've never been keen on SSDs: great for read operations but not so great for writes. Until they sort out the write-wear issues I'm only going to use them when absolutely necessary. The battery-backed RAM drive option (I think Gigabyte did it) looked way more appealing; I wonder why they never pushed it any further, especially with RAM being so cheap now.
  14. Especially on the smaller processes, where the number of cycles was halved from c. 10k to 5k writes. But the speed at which they are growing means that by the time my 128GB OS drive is even close to wearing out (maybe 5 years) I'll be ready to buy something a lot bigger or faster, preferably both.
  15. That's true, but I always have this small niggling feeling of being ripped off due to the wear-levelling issue. I know things inevitably cease to function over time, but planned eventual failure shouldn't be how things are designed.
  16. If possible, I'd create a small partition at the front of the HD and put your pagefile, scratch files and temp files there. Why?

    1. The outside edge of the HD is twice as fast as the inside edge.

    2. Create the pagefile at a fixed size so it doesn't become fragmented.

    3. Adobe, Autodesk and other programs that manipulate large files do a lot of writing to disk.

    4. In the past, the recommendation was always to make the pagefile partition FAT32, because the NTFS file protections are unnecessary on temp files and it eliminates the overhead associated with those protections. I'm not aware of any comparisons under Windows 7, though.
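Point 1 above follows from the platter spinning at a constant RPM: linear velocity, and hence the number of bits passing the head per second, scales with track radius. A sketch with illustrative 3.5-inch platter radii (assumed numbers, not from the thread):

```python
# Why the outer edge of a platter is faster: at constant RPM, linear
# velocity (and bits per second under the head) scales with track radius.
# Illustrative radii for a 3.5" platter; real geometry varies by drive.
INNER_RADIUS_MM = 22.0
OUTER_RADIUS_MM = 46.0

def throughput_ratio(outer_mm, inner_mm):
    """Sustained-transfer-rate ratio of outer vs inner track, assuming
    constant linear bit density along each track (zoned bit recording)."""
    return outer_mm / inner_mm

print(round(throughput_ratio(OUTER_RADIUS_MM, INNER_RADIUS_MM), 2))  # 2.09
```

With these assumed radii the outer tracks come out roughly twice as fast as the inner ones, which is why a small partition at the front of the drive (mapped to the outer tracks) gets the best sustained transfer rates.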
  17. Creating a scratch partition is a good idea (in Linux it's required) especially so you can move all your Adobe scratch files off the drive where the programs themselves are located.
  18. @JackNaylorPE, thanks for your suggestion; that's definitely something I'll be looking further into. I've never really thought about hard drives that way before.

    Seriously, everybody, the feedback I've got on this is just brilliant. I really appreciate everybody taking the time to post their thoughts, even if some of it is going completely over my head.

  19. hairystuff said:
    That's true, but I always have this small niggling feeling of being ripped off due to the wear-levelling issue. I know things inevitably cease to function over time, but planned eventual failure shouldn't be how things are designed.

    It's not really planned eventual failure but more a weakness in the technology. I'd love a cheap 10TB SSD that can read/write at 1GB per second and never wear out, but it just can't be done!

    I'd recommend an SSD to anyone these days. They are cheap, and so much faster than a mechanical HDD that nowadays I will never use a computer that doesn't have one.

    Bear in mind that HDDs fail as well, but there's an important difference in HOW they fail. A dead SSD is typically one you can read from but not write to; a dead HDD allows neither, with a significant risk of data loss. Plus the longevity of an SSD is far longer than the upgrade cycle, for me at least.