
IBM Files Flexible Capacity SSD Patent

By - Source: USPTO | 41 comments

IBM has filed a patent that describes an SSD whose capacity, as well as its drive life, can be adjusted.

The basic idea is that users can either leverage the full capacity of the SSD or reduce its usable size and reserve some of the memory cells as a safety net for when other cells fail. In effect, giving up capacity buys a longer drive life.

IBM envisions that users will be able to configure the desired drive life in combination with a minimum storage capacity via the SSD's firmware: "Based on the user configuration and the utilization, a portion of the SSD memory devices is allocated as available memory, and another portion of the SSD memory devices is reserved as overprovisioned memory, to be used as fallback when available memory devices reach their PE wear out threshold," the patent states.
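The patent describes the behavior rather than an algorithm, so the following is only a minimal sketch of what that allocation step could look like. Every name and constant here (PE_CYCLE_LIMIT, PHYSICAL_GB, waf_for, choose_available_capacity, the crude write-amplification model) is an assumption made for illustration, not anything IBM specifies.

```python
# Illustrative sketch only: constants, names, and the wear model are assumptions.

PE_CYCLE_LIMIT = 3000     # assumed program/erase endurance per cell
PHYSICAL_GB = 128         # raw flash on the drive

def waf_for(overprovision_ratio: float) -> float:
    """Crude write-amplification estimate: more spare area, less amplification."""
    return 1.0 + 1.0 / max(overprovision_ratio, 0.02)

def predicted_life_days(available_gb: float, host_writes_gb_per_day: float) -> float:
    """Estimate drive life for a given exposed capacity and measured write load."""
    op_ratio = (PHYSICAL_GB - available_gb) / available_gb
    total_write_budget_gb = PHYSICAL_GB * PE_CYCLE_LIMIT
    return total_write_budget_gb / (host_writes_gb_per_day * waf_for(op_ratio))

def choose_available_capacity(target_life_days: float,
                              min_capacity_gb: float,
                              host_writes_gb_per_day: float) -> float:
    """Largest exposed capacity (never below the configured minimum) meeting the life target."""
    gb = float(PHYSICAL_GB)
    while gb > min_capacity_gb:
        if predicted_life_days(gb, host_writes_gb_per_day) >= target_life_days:
            break
        gb -= 1.0
    return gb   # PHYSICAL_GB - gb stays reserved as overprovisioned spare area
```

With these made-up numbers, choose_available_capacity(730, 64, 200) settles at roughly 79GB exposed and keeps the remaining ~49GB as spare, while a target the drive cannot meet simply falls back to the 64GB minimum the user configured.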

During operation, the drive could use self-monitoring tools to dynamically adjust storage space, depending on how the drive is actually used. "The proportion of available memory to overprovisioned memory may be adjusted if the utilization changes; as the SSD utilization changes, the controller may allocate or de-allocate available memory to meet the SSD drive life configuration. The SSD drive life is therefore predictable and adjustable."
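The dynamic part could then be little more than re-running that allocation whenever the utilization estimate changes. This continues the sketch above under the same assumptions; the class and method names are invented for this example, and the patent itself only says the controller "may allocate or de-allocate available memory."

```python
# Continuation of the sketch above; purely illustrative, names are assumptions.

class DriveLifeManager:
    def __init__(self, target_life_days: float, min_capacity_gb: float):
        self.target_life_days = target_life_days
        self.min_capacity_gb = min_capacity_gb
        self.available_gb = float(min_capacity_gb)

    def on_utilization_sample(self, host_writes_gb_per_day: float) -> float:
        """Re-balance exposed vs. reserved space from a fresh utilization estimate."""
        new_available = choose_available_capacity(self.target_life_days,
                                                  self.min_capacity_gb,
                                                  host_writes_gb_per_day)
        if new_available < self.available_gb:
            # A real controller would migrate data out of the shrinking LBA range
            # before handing those cells to the spare pool; omitted here.
            pass
        self.available_gb = new_available
        return self.available_gb
```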

Such features would be particularly helpful in corporate environments, especially where SSDs are used for database applications.

  • 2 Hide
    bustapr , May 2, 2011 6:09 PM
    Seems logical and good. Too bad it'll cost a fortune for a while.

    See here Apple, this is what a REAL patent should look like.
  • 2 Hide
    Anonymous , May 2, 2011 6:11 PM
    Fantastic idea... hardly patent-worthy... and probably will be left on its default setting by 99% of corporate users... but a fantastic idea all the same.
  • 0 Hide
    JohnnyLucky , May 2, 2011 6:13 PM
    Sounds like an interesting solution for some business enterprise situations. Would it also be practical for gamers and enthusiasts?
  • 1 Hide
    bhaberle , May 2, 2011 6:16 PM
    bustapr: Seems logical and good. Too bad it'll cost a fortune for a while. See here Apple, this is what a REAL patent should look like.

    Agreed
  • -1 Hide
    rosen380 , May 2, 2011 6:27 PM
    Why would it be better to have, let's say, a 128GB drive that acts like a 64GB drive versus a 128GB drive that acts like a 128GB drive and degrades down to a 64GB drive at about the same time as the other one runs out of reserve memory?
  • 0 Hide
    virtualban , May 2, 2011 6:34 PM
    bustapr: Seems logical and good. Too bad it'll cost a fortune for a while. See here Apple, this is what a REAL patent should look like.

    While I like both the tone and the content of your post, it is a bit of flame bait. That's why I am continuing it a bit:

    Wow, a drive that changes capacity, that's imagical...
  • 1 Hide
    Gin Fushicho , May 2, 2011 6:53 PM
    I'm not sure I like the "Dynamic" idea, but the idea of changing it yourself sounds cool.
  • 1 Hide
    Anonymous , May 2, 2011 6:57 PM
    Man, our patent system is so broken. This isn't a new idea; maybe with SSDs it's new, but not with HDDs. We've been able to choose how to use the space forever. Choosing cluster size and partition size already changes how much real area you have on HDDs vs. how much is reserved by the system, etc.

    I guess somehow using the words SSD and 'spare area' makes this patentable. What a joke....
  • 1 Hide
    rosen380 , May 2, 2011 6:59 PM
    That isn't what this article is about. You can presumably partition any SSD however you'd like.

    This is about essentially letting you set aside some reserve space for when memory cells fail.
  • 1 Hide
    oparadoxical_ , May 2, 2011 7:03 PM
    rosen380: Why would it be better to have, let's say, a 128GB drive that acts like a 64GB drive versus a 128GB drive that acts like a 128GB drive and degrades down to a 64GB drive at about the same time as the other one runs out of reserve memory?

    So that way you don't lose any of that data... If you were to use all 128 gigs and then have some of the cells fail, you just lost some of that stored information. This way, the SSD will (theoretically) detect any failures before they happen and transfer data accordingly. At least that is how I understand it.
  • 1 Hide
    rosen380 , May 2, 2011 7:06 PM
    Likewise though, if it detects a failure, it could just move the data to an unused cell [or a cell marked for deletion]...
  • 2 Hide
    jojesa , May 2, 2011 7:09 PM
    rosen380: Why would it be better to have, let's say, a 128GB drive that acts like a 64GB drive versus a 128GB drive that acts like a 128GB drive and degrades down to a 64GB drive at about the same time as the other one runs out of reserve memory?

    I am not an expert, but since SSD memory has a certain life span and may degrade over time, a 128GB drive that is used as a 128GB drive will use all of its memory cells most of the time, and those cells will degrade and might become inaccessible (die) around the same time.
    A 128GB drive that is used as a 64GB drive will only use half of its memory cells, and when one of those becomes inaccessible because of degradation, it will be replaced by one of the cells you had in reserve, from the 64GB you did not use.

  • 1 Hide
    rosen380 , May 2, 2011 7:12 PM
    As I understood it, each cell has a rough number of write cycles before failing. If that is the case, then by limiting yourself to using half the cells, you'd expect them to fail twice as fast.
  • 1 Hide
    JOSHSKORN , May 2, 2011 7:27 PM
    I wonder if Apple tried to patent 'farting'.
  • 2 Hide
    hellwig , May 2, 2011 7:27 PM
    rosen380: Likewise though, if it detects a failure, it could just move the data to an unused cell [or a cell marked for deletion]...

    The problem here is that the drive (and the OS attached to it) thinks it has 128GB of usable storage space. If, say, 8GB have gone bad, it would really only have 120GB of storage space. The drive would need some way of telling the OS that, hey, I have 8GB of bad sectors, don't try to write more than 120GB of data. There's no such mechanism; instead, when the OS writes data and there are only bad sectors left, the drive can't do anything, so that data is lost and the drive is corrupted. Remember, flash memory re-writes whole sectors at a time, so you might lose the middle sector of a file, and you'd better hope it's not your actual file system that's lost.

    This is the case REGARDLESS of how much memory is provisioned on the drive. If you have a 128GB drive and provision 64GB as a reserve, that leaves 64GB for data. If 70GB of the sectors go bad, the drive now only has 58GB of actual capacity, but still thinks it's at 64GB. The OS writes 58.1GB of data and boom, the drive is essentially dead.

    IBM's patent doesn't fix that problem. What it does is let the customer decide how important data preservation is to their particular application. Average customers might only need 10% provisioned. Flash memory can sustain thousands of writes, and what home user is going to write 128GB thousands of times to a single drive? However, corporate scenarios (especially databases) might write out to a drive millions of times a day. In those cases, provisioning a drive at 20%, 30%, or 50% might be more cost effective than replacing the drive after only 10% has failed.
  • 1 Hide
    rosen380 , May 2, 2011 7:42 PM
    My experience with SCSI drives under IRIX is that when bad blocks are detected on a drive, they are flagged, telling the OS not to write to them. Not sure if that is a feature of IRIX or maybe a feature of SCSI drives, but it seems like if it is possible to work around bad blocks on a rotational drive, you should be able to implement something similar for SSDs.
  • 0 Hide
    JerseyFirefighter , May 2, 2011 7:50 PM
    eff this! Where's the waterproof 1 exabyte SSD patent that doubles as a neat coffee cup coaster?
  • 1 Hide
    rosen380 , May 2, 2011 8:00 PM
    @JerseyFirefighter - well, I don't want it unless it is powered wirelessly and has wireless communication at gigabit or faster -- oh, and it had better not cost more than $0.10 per petabyte!

    :) 
  • 1 Hide
    masterbinky , May 2, 2011 8:05 PM
    I'm sorry, this is not patent-worthy. Problem: ways to handle memory cells failing in a fixed number of cells. Solution: don't use all the memory cells, so when one dies there is another to use. Give this to 100 CS students and you would get this result from one of them. This is not an invention, just standard wear management. Patenting a simple and standard idea is not what patents were for (but now you can patent anything if you make the application long enough, and then sue anyone over that idea).
  • 1 Hide
    rosen380 , May 2, 2011 8:05 PM
    If my math is right, you'd be able to store over 30 million uncompressed Blu-ray movies in one exabyte... Perhaps one exabyte is a bit of overkill right now ;)
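To put rough numbers behind hellwig's home-versus-database point earlier in the thread, here is a minimal back-of-envelope calculation. The endurance figure, the fixed write-amplification factor, and the two daily write rates are all assumptions chosen for illustration, not data from the patent.

```python
# Back-of-envelope numbers for the endurance debate above; all figures are assumed.

PE_CYCLES = 3000
CAPACITY_GB = 128
TOTAL_WRITE_BUDGET_GB = CAPACITY_GB * PE_CYCLES   # 384,000 GB of lifetime writes

def life_years(host_writes_gb_per_day: float, write_amp: float = 2.0) -> float:
    return TOTAL_WRITE_BUDGET_GB / (host_writes_gb_per_day * write_amp * 365)

print(round(life_years(20), 1))     # light desktop use: ~26.3 years
print(round(life_years(2000), 2))   # busy database server: ~0.26 years
```

Under these assumptions a light desktop workload never comes close to wearing the drive out, while a heavy database workload can burn through it in months, which is where a configurable reserve earns its keep.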