Does using most of the space in an SSD degrade the SSD's performance permanently?

crazekid362

Distinguished
Feb 26, 2011
114
0
18,690
I am new to SSDs. I've read that having only a very small percentage of storage available for extended periods can be detrimental to performance. For example, having only 10 GB available on a 120 GB SSD for a long period. Will this permanently degrade performance in any way? Should I be setting aside a portion of the drive that will never be used to avoid this scenario? Thanks.
 

biohazrdfear

Honorable
Mar 1, 2013
340
0
10,860
I know two people offhand who constantly run their SSDs at nearly full capacity. One has about ten gigs available, the other has maybe one gig available. Both their systems are still running flawlessly, with no drive issues. Maybe they are lucky, who knows? But from what I can see, there is no performance degradation from having that little space free. Now, your Windows OS may slow down, but that always depends on how you organize your files, etc.
 

scottlpz

Distinguished
Aug 11, 2011
18
0
18,520
I think it would also depend significantly on the particular SSD in question. I'm not sure whether it would permanently degrade the drive's performance, but it would have an effect for as long as free space stays that low. As a general rule, I think it's a good idea to leave at least 10% of the drive unused for best performance. I know that Samsung SSDs, the 840 EVO in particular, utilize what is known as "over-provisioning" (OP) to permanently set aside a given percentage of capacity for use by the controller, garbage collection, and spare blocks. From what I've read, Samsung 840 EVOs come with an unchangeable factory OP setting of about 7%. I have the 840 EVO 500 GB, and Samsung's SSD utility, Samsung Magician, by default recommends at least 10% of additional free space for OP.

My "500 GB" drive shows a usable capacity of 465.76 GB in Windows. Strictly speaking, most of that gap is a unit difference rather than a reservation: the drive exposes 500 decimal gigabytes (10^9 bytes), which Windows reports in binary gibibytes (2^30 bytes), giving roughly 465.7. It happens to line up with the factory OP figure, though: 500 - 465.76 = 34.24 GB, or about 6.8% of the marketed capacity. I've also seen OP calculated as total capacity (500) minus user capacity (465.76), divided by user capacity; that works out to 34.24 / 465.76 = ~7.4%. Magician's recommended 10% OP would reserve an additional 46.58 GB, giving me just under 420 GB to work with. That being said, I've read that the larger the OP the better (within reason), so I currently have mine set at 25%, or 116.44 GB. As I use the SSD solely for the OS, applications, and games (the vast majority of save games live in "My Documents," which I have redirected to my HDD instead of the default path on C:), 500 GB (349.32 GB after over-provisioning in my case) is still far more space than I need. Even with Windows 7 x64 Professional SP1, the latest Windows Updates, Norton Internet Security 2014, Office 2010 Professional, Adobe Master Collection CS6, Origin, Steam, and a few games, I'm still only using 81.4 GB of space. It's worth mentioning that the performance benchmark included with Samsung Magician has not shown any advantage from the increased OP size; as far as I can tell, it has no effect at all. Keep in mind that it's a synthetic benchmark, though, so that doesn't necessarily reflect real-world performance.
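Just to make the capacity arithmetic above concrete, here's a quick Python sketch of the same numbers. The figures are the ones from my drive; treat it as an illustration of the math, not anything official from Samsung:

```python
# Over-provisioning arithmetic for a "500 GB" drive.
# The 465.76 figure is what Windows reports: marketing gigabytes
# (10^9 bytes) displayed in binary gibibytes (2^30 bytes).

marketed_gb = 500                                  # decimal GB on the box
reported_gib = marketed_gb * 1e9 / 2**30
print(f"Reported capacity: {reported_gib:.2f} GiB")      # ~465.66

user_capacity = 465.76                             # capacity shown in Windows
# The OP percentage expressed both ways mentioned in the post:
op_of_total = (marketed_gb - user_capacity) / marketed_gb * 100
op_of_user = (marketed_gb - user_capacity) / user_capacity * 100
print(f"OP as % of total capacity: {op_of_total:.1f}%")  # 6.8%
print(f"OP as % of user capacity:  {op_of_user:.1f}%")   # 7.4%

# Additional user-configured OP, e.g. 25% as set in Magician:
extra_op = user_capacity * 0.25
remaining = user_capacity - extra_op
print(f"25% OP reserves {extra_op:.2f} GB, leaving {remaining:.2f} GB")
```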

I may be mistaken, but I think user-adjustable OP through a vendor utility is mainly a Samsung thing at the moment, and primarily for the EVO series, as those drives use triple-level cell (TLC) flash (Samsung markets it as "3-bit MLC") as opposed to the generally quicker and more durable two-bit MLC that many other drives use (in my personal experience, Samsung's implementation of OP and the DRAM-based "RAPID" mode, combined with their excellent proprietary NAND controllers, greatly mitigate any performance loss due to the use of TLC). One of the key advantages of OP is that it sets aside spare blocks that the drive will not otherwise use. The upside is that as the drive begins to wear out, bad blocks can be replaced with the "virgin" blocks that were reserved by over-provisioning.

This is a pretty decent explanation of how over provisioning works (FSP stands for flash storage processor):
"The portion of total NAND flash memory capacity held in reserve (unavailable to the user) for use by the FSP is used for garbage collection (the major use); FSP firmware (a small percentage); spare blocks (another small percentage); and optionally, enhanced data protection beyond the basic error correction (space requirement varies).

Even though there is a loss in user capacity with over-provisioning, the user does receive two important benefits: better performance and greater endurance. The former is one of the reasons for using flash memory, including in solid state drives (SSDs), while the latter addresses an inherent limitation in flash memory."

The above quote was taken from the EDN Network (formerly known as Electrical Design News, a monthly magazine that ceased publication in June 2013) at http://www.edn.com/design/systems-design/4404566/Understanding-SSD-over-provisioning. As far as explanations of OP go, this one is pretty good and not that difficult to understand, even from a layperson's perspective.

I realize this doesn't directly address your question about permanent performance degradation due to a lack of free space, but it should help shed some light on why it makes sense to keep at least 10% or so of the SSD's total capacity unused. If you use nearly all of the SSD's space, you're also going to wear the drive out more quickly, as you're writing to and reading from almost all of the available blocks. Once certain areas of the drive begin to wear out, there's little the controller/FSP can do to mitigate the problem. Non-Samsung SSDs perform garbage collection as well; note that this is related to, but distinct from, the often-misunderstood TRIM command (TRIM lets the OS tell the controller which pages hold deleted data, and garbage collection then consolidates the still-valid pages and erases the freed blocks). I can only assume that most, if not all, modern SSDs come straight from the factory with at least some level of OP reserved for the NAND FSP/controller. There are other reasons, mainly related to the way Windows functions, why you would want to keep a certain level of free space, but the above explanation is a pretty good hardware-oriented answer to your question.
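If you want to keep an eye on this yourself, a few lines of Python can check whether a volume has dipped below the 10% rule of thumb discussed above. The path and the threshold here are just my assumptions, not anything a drive vendor specifies:

```python
# Minimal sketch: warn when a volume's free space falls below a threshold.
import shutil

def free_space_ratio(path="/"):
    """Return the fraction of the volume at `path` that is free (0.0 to 1.0)."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total

# 10% threshold is the rule of thumb from the discussion, not a vendor spec.
if free_space_ratio() < 0.10:
    print("Warning: less than 10% free; SSD performance may suffer")
else:
    print("Free space is above the 10% rule of thumb")
```

On Windows you'd pass a drive root such as `"C:\\"` instead of `"/"`.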
 
Solution