SSD Performance: unallocated space vs. no unallocated space . . .

Robtl

Reputable
Sep 13, 2014
Hello All,
I currently have two Toshiba Qosmios, both of which boot from an SSD. The earlier one is an X505-Q870 that I "built" about 4 years ago using an Intel X25-M 128 GB SSD. It uses a 750 GB 5400 RPM conventional HDD for data storage. The processor is a 1.6 GHz Intel Core i7, and it has 8 GB of memory running Windows 7 Home Premium 64-bit.
The "new" one is a Qosmio ABT3G22. It boots from a Samsung 850 Pro 128 GB SSD and uses a 1 TB 7200 RPM HGST conventional HDD for data storage. The processor is a 2.4 GHz Intel Core i7, and it also has 8 GB of memory, but it runs Windows 7 Professional 64-bit.
When I set the new one up, I initially "overprovisioned" 12.11 GB of space (a tiny bit over the 10% mark), which left 106.73 GB for the "C" partition. 56.61 GB of the "C" drive is filled.
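(For reference, the arithmetic: 106.73 GB + 12.11 GB = 118.84 GB total, and 12.11 / 118.84 ≈ 10.2%, hence "a tiny bit over the 10% mark.")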
Last night I ran the ATTO Disk Benchmark application (with all settings at their default values), and then I used EaseUS Partition Master to increase the size of the "C" drive so that there was no unallocated space (the full 118.84 GB is now used for the "C" drive). I then ran the ATTO Disk Benchmark again.
I was surprised by the results! With no "overprovision," the drive is SIGNIFICANTLY faster! I have read that the read/write at a 4 MB block size is the true measure. The read/write 'before' was 569638/404701, and 'after' it is 633675/430301. The fastest read and write speeds were at a 1024 MB block size: with overprovisioning, the read/write numbers were 2811543/1539654; without overprovisioning, they are 3504207/2684354!! I can post the charts if someone can tell me how that is done . . .
As a final test, I let Samsung's "Magician" software "unallocate" exactly 10% of the C: drive and then reran ATTO. The benchmark results were EXACTLY the same as on the first run, when I had manually set aside 12.11 GB.
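(For what it's worth, 10% of 118.84 GB is about 11.88 GB, so Magician set aside essentially the same amount as the 12.11 GB I had reserved by hand, which presumably explains the identical numbers.)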
I then used EaseUS Partition Master to return the "C" drive to the full 118.84 GB.
Out of curiosity, I ran "fsutil behavior query DisableDeleteNotify" at a "run as administrator" command prompt, and a "0" was returned, which means TRIM is turned on. I then ran TrimCheck, and the program reported: "TRIM appears to be working . . ."
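For anyone who wants to repeat that check, it looked roughly like this (the exact output wording can differ between Windows versions):

C:\>fsutil behavior query DisableDeleteNotify
DisableDeleteNotify = 0

A result of 0 means TRIM (delete notification) is enabled; a 1 would mean it is disabled.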
So, any thoughts on this? My inclination is to leave the "C" drive at the full 118.84 GB, since it provides better performance . . .
By the way, no overprovisioning was ever done on the older machine.
 

RealBeast

Titan
Moderator
For consumer use of an SSD, you do not need to set aside additional space for the controller to deal with wear leveling, etc.

Most manufacturers reserve around 7% of the actual total physical NAND for controller use, although many software programs that allow additional over-provisioning do not see this reserved space, as it cannot be used for anything other than controller operations.
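As a rough illustration (assuming the raw NAND is a binary 128 GiB, which is typical but not something I can confirm for any specific model): 128 GiB = 137.44 GB, and 137.44 GB minus the 128 GB advertised leaves about 9.4 GB held back for the controller, or roughly 7%.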

It is odd that you got poorer performance with increased OP, as it should (at least theoretically) improve performance, although on small drives limiting the available NAND could conceivably hurt speeds. In the real world there is very little performance difference, and the only good reason to OP is an enterprise SSD that sees terabytes of writes weekly or monthly. Many of my older SSDs (the original Intel X25-M 80 GB), now used as scratch drives, are past 50-100 TB of writes, and all are going strong and still indicate 100% life.

tl;dr version: you should use all of the space and not OP beyond what is done at the factory, which you cannot see anyway.
 

Robtl

Reputable
Sep 13, 2014
RealBeast, thanks for your thoughts!
I ran these tests because I had heard/read that manufacturers "OP'd" 7% of the drive and that this space was hidden.
I follow your rationale on the enterprise situation being different.
I find it interesting that Samsung's "Magician" program grabs an additional 10% of drive capacity.
I intend not to OP any additional space . . .
Thanks again.