SSD fragmentation and indexing?

Anonymous
June 9, 2011 11:09:09 AM

I don't have an SSD... YET

My question is: what are all the things you need to disable to extend the life of your SSD? Defrag, indexing, what else? I use my PC for at least 5 years before buying a new one, and most companies rate their SSD lifespan at 5 years.

SSDs having a limit on the number of reads/writes doesn't seem promising. I have this mega old PC running triple 5GB hard drives that have been going for maybe 10 years.
June 20, 2011 10:14:47 PM

Defragmenting an SSD is not recommended, as the defragmentation process itself incurs a lot of writes, which is what wears out the NAND cells used by SSDs. Furthermore, defragmenting an SSD is counter-productive: the SSD controller tries to steer data to the cells that have been written to the least - this is called wear leveling - and defragmenting works against this by re-allocating data.
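To illustrate the principle, here's a toy sketch in Python (purely illustrative; real controllers remap at the flash-block level and are far more sophisticated): the controller steers each write to the least-worn block, so wear ends up spread evenly.

```python
# Toy model of wear leveling: route each write to the physical block
# with the fewest erase cycles so wear spreads evenly across the NAND.
# This illustrates the principle only, not any real controller.

def pick_block(erase_counts):
    """Return the index of the least-worn physical block."""
    return min(range(len(erase_counts)), key=lambda i: erase_counts[i])

erase_counts = [0] * 8            # 8 hypothetical NAND blocks
for _ in range(100):              # 100 incoming writes
    erase_counts[pick_block(erase_counts)] += 1

print(erase_counts)               # evenly worn: [13, 13, 13, 13, 12, 12, 12, 12]
```

Defragmenting fights exactly this mechanism by forcing extra writes to locations the controller didn't choose.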
June 21, 2011 1:30:03 AM

^+1...

I agree with Gene O and WyomingKnott... As my article showed (link provided by WyomingKnott), most tweaks provide little to no performance benefit beyond decreasing unnecessary writes. The best benefit is in the capacity you gain from the tweaks (though there are many sides to the pagefile debate :D  ).
June 21, 2011 3:06:03 AM

WyomingKnott said:
This article: http://www.tomshardware.com/reviews/ssd-performance-twe... starts with some good tips that I first saw in a guide by our member Tcmo34. Defragmentation is absolutely unnecessary with an SSD. No upside possible.

I was having slower boot times than expected right after I got my SSD, and defrag'd once to see if there would be a difference. There was; it booted much faster. There is no upside to doing it on a regular basis like there is on an HDD, but one time after major installs helps quite a bit from what I've seen. Sequential reads are still much faster on an SSD.
June 21, 2011 3:37:29 AM

Quote:
I don't have an SSD... YET

My question is: what are all the things you need to disable to extend the life of your SSD? Defrag, indexing, what else? I use my PC for at least 5 years before buying a new one, and most companies rate their SSD lifespan at 5 years.

SSDs having a limit on the number of reads/writes doesn't seem promising. I have this mega old PC running triple 5GB hard drives that have been going for maybe 10 years.


Well, I'll assume you have Windows 7.
The main things you want to disable - not necessarily to extend the drive's life, but because they're simply unnecessary on an SSD:
-Never ever defragment; disable the scheduled defrag
-Disable Superfetch
-Disable Prefetch
-Disable file indexing

Some other things you should consider:
-Disable the hibernation file
-Disable the page file (Windows barely uses the pagefile, if at all, when you have sufficient RAM, but SSDs are expensive and a pagefile and hibernation file take up LOTS of space - so either disable it, move it to another HDD, or shrink it). A scripted sketch of these tweaks is below.

^It won't really increase life, but it will free up system resources, because these resource-intensive tasks are simply not needed on an SSD; on an HDD they are helpful, though.
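Here's a rough sketch of those tweaks driven from Python. The service names (SysMain for Superfetch, WSearch for file indexing), the scheduled-task path, and the registry path are the standard Windows 7 ones, but verify them on your own system first, and run this from an elevated (administrator) prompt.

```python
# Rough sketch: apply the SSD tweaks above on Windows 7.
# Assumes the standard service names (SysMain = Superfetch,
# WSearch = file indexing) and registry path - verify before running.
import subprocess
import winreg

commands = [
    # Disable the scheduled defrag task
    ['schtasks', '/Change', '/TN',
     r'\Microsoft\Windows\Defrag\ScheduledDefrag', '/Disable'],
    # Disable the Superfetch service
    ['sc', 'config', 'SysMain', 'start=', 'disabled'],
    # Disable the file-indexing (Windows Search) service
    ['sc', 'config', 'WSearch', 'start=', 'disabled'],
    # Remove the hibernation file
    ['powercfg', '-h', 'off'],
]
for cmd in commands:
    subprocess.run(cmd, check=True)

# Prefetch is controlled by a registry value rather than a service
key_path = (r'SYSTEM\CurrentControlSet\Control\Session Manager'
            r'\Memory Management\PrefetchParameters')
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                    winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, 'EnablePrefetcher', 0, winreg.REG_DWORD, 0)
```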

Quote:

SSDs having a limit on the number of reads/writes doesn't seem promising


It's not like that. Imagine a car (the HDD) that has an unknown limit to how many miles it can do before breaking - unreliable, and typically failing after, let's say, 100,000 miles. Now imagine a car (the SSD) with a "known" limit of exactly 1,000,000 miles, which is far more reliable and almost guaranteed to reach that mark before failing. The point is that just because the SSD has a limit doesn't make its life shorter than an HDD's; in fact, SSDs tend to be far more reliable and can last much longer than an HDD thanks to better vibration tolerance, better temperature tolerance, and no mechanical parts.
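To put rough numbers on that "known limit": an SSD's total write budget is roughly its capacity times the NAND's rated program/erase cycles. Every figure in the sketch below is an illustrative assumption, not a spec for any real drive, but it shows why the write limit rarely matters in practice.

```python
# Back-of-the-envelope SSD endurance estimate.
# Every number here is an illustrative assumption, not a real spec.
capacity_gb = 120          # assumed drive capacity
pe_cycles = 3000           # assumed NAND program/erase cycle rating
host_writes_gb_day = 10    # assumed daily writes from the OS
write_amplification = 2.0  # assumed controller write overhead

total_writes_gb = capacity_gb * pe_cycles / write_amplification
years = total_writes_gb / (host_writes_gb_day * 365)
print(f"roughly {years:.0f} years of writes at this rate")  # ~49 years
```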
June 21, 2011 3:38:55 AM

Nakkiel said:
I was having slower boot times than expected right after I got my SSD, and defrag'd once to see if there would be a difference. There was; it booted much faster. There is no upside to doing it on a regular basis like there is on an HDD, but one time after major installs helps quite a bit from what I've seen. Sequential reads are still much faster on an SSD.


On an SSD every block takes the same time to access; on an HDD position on the platter makes a difference, but on an SSD it doesn't, so defragmentation is useless. I highly doubt that was the reason. It's either a psychological factor, or perhaps Windows was configuring some update on the first boot. Just a guess.
June 21, 2011 1:06:43 PM

Nakkiel said:
I was having slower boot times than expected right after I got my SSD, and defrag'd once to see if there would be a difference. There was; it booted much faster. There is no upside to doing it on a regular basis like there is on an HDD, but one time after major installs helps quite a bit from what I've seen. Sequential reads are still much faster on an SSD.

While most people will probably disagree with this, I once tried to "defragment" in the following way. It works beautifully for HDDs, and doesn't mess up the list of available blocks on the SSD.

-Take an image backup
-Do a secure erase
-Restore the image backup

On an HDD, this eliminates fragmentation entirely, since the tool I use restores file-at-a-time when it can identify blocks as belonging to files.

A question to the more knowledgeable: SSDs are notably faster on serial reads than random reads. If a file appears to the OS to be contiguous, but the blocks are scattered around the SSD by wear-leveling, which read speeds are going to apply: serial, as it appears to the OS, or random, since it is going to different locations around the SSD?
a b G Storage
June 21, 2011 2:16:40 PM

WyomingKnott said:
While most people will probably disagree with this, I once tried to "defragment" in the following way. It works beautifully for HDDs, and doesn't mess up the list of available blocks on the SSD.

-Take an image backup
-Do a secure erase
-Restore the image backup

On an HDD, this eliminates fragmentation entirely, since the tool I use restores file-at-a-time when it can identify blocks as belonging to files.

A question to the more knowledgeable: SSDs are notably faster on serial reads than random reads. If a file appears to the OS to be contiguous, but the blocks are scattered around the SSD by wear-leveling, which read speeds are going to apply: serial, as it appears to the OS, or random, since it is going to different locations around the SSD?


On a hard disk, accessing different locations may mean moving the heads, and that makes a big difference to the speed. That's one of the reasons we defragment (the others are a bit more esoteric).

On an SSD, there is no speed penalty for accessing any block - it is a truly random-access device. There can be a bonus for locality of access, in that the next "sector" is already in the cache (the cache doesn't receive one sector at a time - it gets larger blocks). However, the wear-leveling works at the block level, not at the sector level, so I'd expect files the OS sees as contiguous to still be contiguous as far as the cache is concerned.
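If you want to see the sequential-versus-random gap for yourself, a crude timing test like the sketch below will show it. TEST_FILE is a placeholder; point it at a large file (1 GB or so) on the drive under test, and remember the OS file cache will skew repeat runs.

```python
# Crude sequential-vs-random read timing sketch.
# TEST_FILE is a hypothetical placeholder - supply your own large file.
import os
import random
import time

TEST_FILE = 'testfile.bin'
BLOCK = 4096  # 4 KiB reads, a common random-I/O size

size = os.path.getsize(TEST_FILE)
offsets = list(range(0, size - BLOCK, BLOCK))

with open(TEST_FILE, 'rb') as f:
    start = time.time()
    for off in offsets:            # sequential pass, in file order
        f.seek(off)
        f.read(BLOCK)
    seq = time.time() - start

    random.shuffle(offsets)        # same reads, shuffled order
    start = time.time()
    for off in offsets:            # random pass
        f.seek(off)
        f.read(BLOCK)
    rnd = time.time() - start

print(f"sequential: {seq:.1f}s  random: {rnd:.1f}s")
```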

I don't think anyone has written a defrag tool that can look beneath the wear levelling to place files on physically contiguous blocks. It would be made even more complex by the compression used by SandForce drives.
June 21, 2011 4:56:19 PM

WyomingKnott
Quote:

A question to the more knowledgeable: SSDs are notably faster on serial reads than random reads. If a file appears to the OS to be contiguous, but the blocks are scattered around the SSD by wear-leveling, which read speeds are going to apply: serial, as it appears to the OS, or random, since it is going to different locations around the SSD?


Sminlal or cjl can answer that; they are the most knowledgeable on SSDs that I've seen so far. However, I'll give it a shot. I think the physical layout of the blocks will override what the OS thinks, so if the blocks are scattered across the SSD physically, then random access times are what will be in play. I'm not sure, though.

CompulsiveBuilder:
Quote:

I don't think anyone has written a defrag tool that can look beneath the wear levelling to place files on physically contiguous blocks. It would be made even more complex by the compression used by SandForce drives.


I'm not sure, but Diskeeper HyperFast claims to do exactly that, I think. I don't know how well it works, if at all, but they claim to realign the blocks for faster access and faster sequential reads.