Cloning SSD array

I asked this in another post, but I think it got lost among the other issues I had there, so I'm asking it separately now.

I'm running two Intel X25-M SSDs for my OS in RAID 0, and I'm very happy with it so far. I plan to clone the array (I use Casper for cloning) to another internal drive (my case has plenty of room and I have an open SATA port) periodically, for two reasons. First, of course, if the SSD RAID array fails for any reason, I have my OS drive right at hand and can boot from it. Second, if the SSDs slow down and I need to "secure erase" them to get them back to factory performance, I can clone to the backup, boot from the backup, do the secure erase, and then clone back to the SSD array.

I'm wondering if this is a practical way to keep the SSDs fresh, since TRIM is not possible with the array.
  1. Make sure that the cloning application obeys the partition alignment that Windows 7 uses. Some cloning programs start the partition at the old 31.5 KiB offset (sector 63), just like XP does, which ruins both your RAID performance and that of the SSDs.
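    A quick way to sanity-check alignment after cloning is to look at the partition's starting sector. A minimal sketch (the offsets below are the classic examples; read the real start sector from your partitioning tool, e.g. diskpart):

```python
# Sketch: check whether a partition start is 4 KiB-aligned.
# The offsets below are illustrative, not read from a real disk.

SECTOR = 512  # bytes per logical sector

def is_aligned(start_sector, granularity=4096):
    """True if the partition starts on a `granularity`-byte boundary."""
    return (start_sector * SECTOR) % granularity == 0

# XP-era tools start the first partition at sector 63 (31.5 KiB offset):
print(is_aligned(63))    # False: misaligned
# Windows 7 starts at sector 2048 (1 MiB offset):
print(is_aligned(2048))  # True: aligned
```

    In practice, Windows 7's own partitioner and alignment-aware cloning tools use the 1 MiB start automatically.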
  2. Thanks, yes, Casper can handle this. I checked with them.

    So is your answer to my question "Yes, you can do that"? What I want to be sure of is that cloning back to the SSDs won't just put them back in the state they were in before, undoing the effect of the secure erase.
  3. A secure erase plus a clone restore would leave the drives close to factory-fresh performance, so yes, it will work.

    However, if you did not leave any space unused (unpartitioned space), you would end up with 100% full flash cells, meaning very low performance.

    You have to reserve at least 20% to counteract that; anything from 20-50% works. The more you reserve, the higher the write performance and the lower the write amplification.
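    As a rough illustration of what that reservation looks like in practice (80 GB per X25-M is an assumption here; the thread never states the actual drive size):

```python
# Rough over-provisioning arithmetic. 80 GB per X25-M is an assumption;
# the thread never states the actual drive size.

def overprovision(total_gb, reserve_fraction):
    """Split capacity into a usable partition and unpartitioned spare."""
    spare = total_gb * reserve_fraction
    return total_gb - spare, spare

usable, spare = overprovision(2 * 80, 0.20)  # two 80 GB drives in RAID 0, 20% spare
print(usable, spare)  # 128.0 32.0 -> partition 128 GB, leave 32 GB unpartitioned
```

    The spare area only works if it is never written to, which is why it must stay unpartitioned rather than just "empty."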
  4. Okay, so what is the drive doing with the unallocated space? How is that helpful? I'm not clear on this.
  5. Copy-paste from an earlier thread - sorry if this is too technical but you asked for it:

    SSDs treat all space as internal/spare until you write to it. The TRIM command 'frees' a block again, so the SSD knows it can forget the data stored there and use the block for something else.

    So why do SSDs need free space? Normally, to change just 4KiB of a file, the SSD would need to read a 128KiB block, erase it, calculate new ECC, and write the 128KiB back with the new ECC. That is both slow and costs a lot of write cycles (256KiB moved for writing just 4KiB).

    So the SSD uses a trick: it writes to a free cell instead and internally remembers that the data actually belongs to file X. (To keep things simple: the OS thinks in terms of sectors, 512-byte chunks, and uses LBAs to locate a file; the SSD doesn't know about 'files', only about LBAs.) This is called 'remapping writes', and it requires only 4KiB of writes without going through the slow read-erase-program cycle.

    So really, any space that hasn't previously been written can be used by the SSD internally. If you write to such a location, the SSD 'lies' and writes elsewhere, remembering the redirection internally. The OS therefore has no knowledge of where data is REALLY stored on the SSD. A Secure Erase simply erases the mapping tables that record where each LBA sector is actually stored.

    However, the fun stops when free cells run out, and that will happen if you don't reserve space and don't have TRIM. Even if you delete files, the SSD has no knowledge of this and thinks all those flash cells are still in use. The drive does hide about 8% of its storage from the OS/user, because it needs at least SOME spare space for itself; the more, the better, however. Enterprise MLC drives coming in Q4 2010 will have much more space (up to 50%) reserved for internal use and won't need TRIM at all.
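    The remapping and free-cell behaviour described above can be sketched as a toy model (cell counts and names are invented for illustration; a real flash translation layer is far more complex):

```python
# Toy model of write remapping. Cell counts and names are invented;
# a real flash translation layer is far more complex.

class ToySSD:
    def __init__(self, n_cells):
        self.free = list(range(n_cells))  # cells holding no valid data
        self.map = {}                     # LBA -> physical cell
        self.stale = set()                # cells with superseded data

    def write(self, lba):
        if not self.free:
            raise RuntimeError("no free cells left: slow read-erase-program path")
        if lba in self.map:
            self.stale.add(self.map[lba])  # old copy becomes stale, NOT free
        self.map[lba] = self.free.pop()    # redirect the write to a free cell

    def secure_erase(self):
        """Drop the mapping tables; every cell becomes free again."""
        self.free = sorted(set(self.free) | set(self.map.values()) | self.stale)
        self.map.clear()
        self.stale.clear()

ssd = ToySSD(n_cells=4)
for _ in range(3):
    ssd.write(0)                        # rewriting one LBA eats a fresh cell each time
print(len(ssd.free), len(ssd.stale))    # 1 2 -- without TRIM, stale cells pile up
ssd.secure_erase()
print(len(ssd.free))                    # 4 -- back to a fresh drive
```

    Note that deleting a file in the OS never calls anything on this model, which is exactly the no-TRIM situation: stale cells only come back via the secure erase.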
  6. If you're in the mood to experiment a bit, by way of doing a little "computer science", I would also suggest making two clones of your OS partition:

    (1) first, the way you were planning to do it;

    (2) second, after defragmenting the RAID 0 partition.

    You're going to do a "secure erase" on your SSDs anyway, so the degradation induced by such a defrag should not matter.

    The hypothesis here is that defragmenting logical sectors should put them into serial order, physically speaking, but only AFTER the restore task (not before it). Then, during the restore, consecutive 512-byte sectors should map into consecutive physical sectors within sequential NAND flash clusters.

    Just a thought :)

    p.s. Check out the CONTIG and PAGEDEFRAG freeware, which might also help with option (2) above.

  7. MRFS, are you saying that defragmenting an SSD would store the files sequentially/tidily on the SSD? That's not the case: all the writes done by defragmenting will be 'random writes', mapped to free cells until they run out. So all that defragmenting accomplishes is actually fragmenting the SSD internally.

    Afterwards you can see all the data sitting close together - but that's not real. It's how the OS thinks the data is stored. The SSD actually stored it very differently, and the defragmenting now means all free cells are used up: the SSD is heavily fragmented internally and has lost a lot of its performance and lifespan.
  8. > are you saying that defragmenting an SSD would store the files sequentially/tidy on the SSD?

    No. I'm theorizing that the restore would store the files sequentially/tidy on the SSDs, not the defrag step, particularly after his "secure erase" step completes OK.

    Put differently, there would be a one-to-one mapping between logical sectors and their physical addresses within the SSD RAID 0 array.

  9. sub mesa:

    So as far as the SSD is concerned, the space that the OS leaves unallocated is fully usable, but because it's unallocated, Windows will only ever be allowed to use the portion of the drive that was allocated. Is all that true?

    Also, if you leave 15GB unallocated (which is what I did, based on other threads you were involved in), then the drive gets slow once I've filled the allocated part AND the SSD has used up the remaining free space plus the unallocated space for internal use? It seems like that could happen pretty quickly if you had, say, 120GB on a 133GB partition and did a lot of write/erase activity. I see that TRIM would handle all this automatically in non-RAID Windows 7 setups, but without it, it seems like you would have to refresh these drives quite often.

    Intel/Microsoft, or whoever is responsible, has sure done us a huge disservice by neglecting those who want to set up their systems like this (and if they are just hoping [or pressuring] for fewer people to RAID their systems, that's not really an answer IMO). They haven't even fixed it so you can see SMART data through onboard RAID, so I guess we shouldn't expect better here.

    MRFS: Sounds logical. I had considered that before but it sounded risky.

    Anyway, if you clone a 500GB drive holding 100GB of data to an identical 500GB drive (both spinning hard drives, each with one partition covering the full drive), the cloning process has to make sure the target is empty wherever the 100GB isn't going, so I assume it writes zeros there. So cloning 70GB back to a 130GB SSD partition would essentially "use up" the other 60GB as far as consumed NAND cells go. The unallocated space on the SSD was freed by the secure erase and isn't touched by the cloning process. But a factory-new SSD, or one that you secure-erased and then just installed or copied things to (rather than cloned), would also have that extra 60GB free for its internal processes, so it would be a faster drive than the one you cloned to, right?
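    The arithmetic behind that scenario can be sketched like this (whether a particular cloning tool really writes the free space, rather than copying only used sectors, is an assumption, not a fact):

```python
# Back-of-envelope for the 133 GB partition / 70 GB data scenario above.
# Whether the clone zero-fills free space is an assumption about the tool.

def cells_used_after_restore(partition_gb, data_gb, sector_level):
    """GB of flash cells consumed right after restoring onto an erased drive."""
    return partition_gb if sector_level else data_gb

print(cells_used_after_restore(133, 70, sector_level=True))   # 133: whole partition written
print(cells_used_after_restore(133, 70, sector_level=False))  # 70: only real data written
```

    Either way, the unpartitioned spare area stays untouched, which is the point of reserving it.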