IBM Files Flexible Capacity SSD Patent
IBM has filed a patent that describes an SSD whose capacity, and with it its drive life, can be adjusted.
The basic idea is that users can either choose to leverage the full capacity of the SSD or reduce the size and reserve some of the memory cells as a safety net when other memory cells fail. In effect, less capacity can provide a longer drive life.
IBM envisions that users can configure the desired drive life in combination with a minimum storage capacity, which will be done via the firmware of the SSD: "Based on the user configuration and the utilization, a portion of the SSD memory devices is allocated as available memory, and another portion of the SSD memory devices is reserved as overprovisioned memory, to be used as fallback when available memory devices reach their PE wear out threshold," the patent states.
During use, a drive could employ self-monitoring tools to dynamically adjust storage space depending on how the drive is actually used. "The proportion of available memory to overprovisioned memory may be adjusted if the utilization changes; as the SSD utilization changes, the controller may allocate or de-allocate available memory to meet the SSD drive life configuration. The SSD drive life is therefore predictable and adjustable."
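The allocation step the patent describes can be sketched in a few lines. This is a toy model, not IBM's firmware logic: the endurance and write-amplification formulas below are illustrative assumptions, chosen only to show how a controller could trade exposed capacity against a configured drive-life target.

```python
# Toy sketch of splitting flash between "available" and "overprovisioned"
# memory to meet a user-configured drive-life target. All formulas and
# constants are illustrative assumptions, not IBM's actual design.

def drive_life_days(total_gb, available_gb, daily_writes_gb, pe_cycles):
    """Estimate drive life: spares soak up wear, so the write budget spans
    the whole flash array, while write amplification falls as the reserve
    (overprovisioning) grows."""
    op = (total_gb - available_gb) / available_gb            # overprovision ratio
    write_amp = 1.0 + 1.0 / (2.0 * op) if op > 0 else 10.0   # crude WA model
    endurance_gb = total_gb * pe_cycles                      # total write budget
    return endurance_gb / (daily_writes_gb * write_amp)

def allocate(total_gb, min_capacity_gb, target_days, daily_writes_gb, pe_cycles):
    """Expose as much capacity as possible while still meeting the target
    drive life; whatever is left becomes the overprovisioned reserve."""
    for available in range(total_gb, min_capacity_gb - 1, -1):
        if drive_life_days(total_gb, available,
                           daily_writes_gb, pe_cycles) >= target_days:
            return available, total_gb - available
    return min_capacity_gb, total_gb - min_capacity_gb
```

Under these assumed numbers, `allocate(128, 64, 1000, 100, 3000)` — 128GB of flash, a 64GB capacity floor, 100GB of writes per day, 3,000 P/E cycles — exposes 108GB and reserves 20GB to meet a 1,000-day target.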
Such features are extremely helpful in corporate environments, especially in areas where SSDs are used in database applications.

See here Apple, this is what a REAL patent should look like.
Agreed
While I like both the tone and the content of your post, it is a bit of flame bait. That's why I am continuing it a bit:
Wow, a drive that changes capacity, that's imagical...
I guess somehow using the words SSD and 'spare area' make this patentable. What a joke....
This is about essentially letting you set aside some reserve space for when memory cells fail.
So that way you don't lose any of that data... If you were to use all 128GB, and then have some of the cells fail, you just lost some of that stored information. This way, the SSD will (theoretically) detect failures before they happen and transfer data accordingly. At least that is how I understand it.
I am not an expert, but since SSD memory cells have a certain life span and degrade over time, a 128GB drive used as a full 128GB will use all of its memory cells most of the time, and those cells will degrade and might become inaccessible (dead) at around the same time.
A 128GB drive used as 64GB will only use half of the memory cells, and when one of those becomes inaccessible through degradation, the drive will swap in one of the fresh cells held in reserve — from the 64GB you did not use.
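The swap-in described above can be pictured as a small remapping table, assuming the patent's "fallback" behaves like classic bad-block remapping (this class and its method names are hypothetical, for illustration only):

```python
# Toy remapping table: when a block wears out, the controller silently
# points its logical address at a spare physical block from the reserve.
# Illustrative sketch only, not IBM's actual controller design.

class RemapTable:
    def __init__(self, spares):
        self.spares = list(spares)   # physical blocks held in reserve
        self.map = {}                # logical block -> spare physical block

    def retire(self, logical_block):
        """Replace a worn-out block with a fresh one from the reserve."""
        if not self.spares:
            raise RuntimeError("reserve exhausted: drive is at end of life")
        self.map[logical_block] = self.spares.pop()

    def resolve(self, logical_block):
        """Translate a logical block to its current physical block."""
        return self.map.get(logical_block, logical_block)
```

Once the reserve is empty, the next worn-out block cannot be replaced — which is exactly the end-of-life moment the user's capacity/life trade-off is meant to postpone.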
The problem here is that the drive (and the OS attached to it) thinks it has 128GB of usable storage space. If, say, 8GB have gone bad, it would really only have 120GB of storage space. The drive would need some way of telling the OS: hey, I have 8GB of bad sectors, don't try to write more than 120GB of data. There's no such mechanism, so when the OS writes data and only bad sectors are left, the drive can't do anything — that data is lost and the drive corrupted. Remember, flash memory rewrites whole sectors at a time, so you might lose the middle sector of a file, and you'd better hope it's not your actual file system that's lost.
This is the case REGARDLESS of how much memory is provisioned on the drive. If you have a 128GB drive and provision 64GB as reserve, that leaves 64GB for data. If 70GB of the sectors go bad, the drive is now only at 58GB of actual capacity, but still thinks it's at 64. The OS writes 58.1GB of data and boom, the drive is essentially dead.
IBM's patent doesn't fix that problem. What it does is let the customer decide how important data preservation is to their particular application. Average customers might only need 10% overprovisioning. Flash memory can sustain thousands of writes, and what home user is going to write 128GB thousands of times to a single drive? However, corporate scenarios (especially databases) might write out to a drive millions of times a day. In those cases, provisioning a drive at 20%, 30%, or 50% might be more cost effective than replacing the drive after only 10% of it has failed.
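The home-versus-database gap is easy to put numbers on. A back-of-envelope check, where the 3,000 P/E cycles, drive size, and daily write volumes are all illustrative assumptions:

```python
# Rough endurance arithmetic for the scenario above; every constant here
# is an assumed, illustrative value.
pe_cycles = 3000
capacity_gb = 128
total_write_budget_gb = pe_cycles * capacity_gb   # 384,000 GB of writes total

home_daily_gb = 20     # light desktop workload
db_daily_gb = 2000     # busy database server

home_years = total_write_budget_gb / home_daily_gb / 365   # ~52.6 years
db_years = total_write_budget_gb / db_daily_gb / 365       # ~0.5 years
```

Under these assumptions a home user would take decades to exhaust the drive's write budget, while a write-heavy database server burns through it in about half a year — which is why the heavier overprovisioning trade-off only makes sense for the latter.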