
Thoughts, Reliability, First Results

Can The Flash-Based ioDrive Redefine Storage Performance?

We already mentioned the manufacturer's reliability statements: Fusion-io promises 24 years of reliability for the 80 GB entry-level model at a 40% duty cycle and 5 TB of writes or erases per day. However, if for some reason drive activity is more intensive than expected, that 5 TB budget can actually be written in just over two hours of continuous full-speed writing. In other words, you can do roughly two hours of full-speed write operations per day and still stay within Fusion-io's forecast, but you should always be careful. As with other storage devices, backup should be a top priority, and it isn't difficult to perform given the ioDrive's relatively small total capacities.
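To put that budget in perspective, here is a quick back-of-the-envelope calculation; the roughly 600 MB/s sustained write speed is an assumption based on the ioDrive's quoted peak write throughput, and the exact figure depends on the performance mode:

```python
# Back-of-the-envelope: how long continuous full-speed writing takes
# to exhaust a 5 TB/day write budget. The 600 MB/s sustained write
# speed is an assumed figure based on the ioDrive's peak throughput.

DAILY_BUDGET_MB = 5_000_000   # 5 TB expressed in MB (decimal units)
WRITE_SPEED_MBPS = 600        # assumed sustained write speed, MB/s

seconds = DAILY_BUDGET_MB / WRITE_SPEED_MBPS
hours = seconds / 3600

print(f"{seconds:,.0f} s, i.e. about {hours:.1f} hours of continuous writing")
# about 8,333 s, i.e. roughly 2.3 hours
```

This matches the statement above: roughly two hours of full-speed writes per day stay within the forecast.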

Write Performance

Double decker: Fusion-io installs two layers of flash chips to save board real estate.

We tried all three performance settings: maximum capacity, improved write performance at 50% of total capacity, and maximum write performance at 30% remaining capacity. We found that the improved write performance mode actually doubled write throughput in some benchmarks, while the maximum write performance mode added little further benefit. If you aren't sure which setting to use, run some tests on a development system comparable to your production machine to find out.

I/O Versus Throughput

Flash memory performance depends a lot on the flash memory controller, which has to account for the characteristics of SLC and MLC flash memory. Smart devices optimize writes by reducing the number of physical write operations, and they take care of so-called write amplification. Since flash SSDs are organized into blocks of data, these blocks define the minimum amount of data that can be written to a flash SSD at once: writing only 2 KB of data may trigger a 128 KB write, even though that is not logically needed.
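The effect is easy to quantify. Here is a minimal sketch using the 128 KB block size from the example above; real controllers soften this with buffering, remapping, and wear leveling:

```python
import math

# Write amplification: a flash block is the smallest physically
# writable unit, so a small logical write can force a much larger
# physical write. The 128 KB block size is the example value from
# the text; real controllers buffer and remap to reduce this.

BLOCK_SIZE_KB = 128

def write_amplification(logical_kb: float) -> float:
    """Physical KB written divided by logical KB requested."""
    physical_kb = math.ceil(logical_kb / BLOCK_SIZE_KB) * BLOCK_SIZE_KB
    return physical_kb / logical_kb

print(write_amplification(2))    # 2 KB write touches one 128 KB block -> 64.0
print(write_amplification(128))  # perfectly aligned write -> 1.0
```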

Most flash memory controllers adjust to the workload they have to take on, which means that performance may change drastically if you switch from intensive, small, random I/O operations to sequential reads or writes. We looked at this as well and found a negative impact on throughput right after heavy I/O; Fusion-io manages to readjust performance quickly, although not instantly. At the same time, we have to underscore that no one would buy an ioDrive for sequential storage operations: hard drives are much cheaper and even faster in that mode.
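One way to observe this behavior yourself is a simple before/after test: time sequential writes, fire a burst of small random writes, then time sequential writes again. The sketch below is only illustrative; the file path, sizes, and counts are arbitrary example values, and a real test would use a tool like Iometer against the raw device:

```python
import os
import random
import time

# Before/after sketch: measure sequential write throughput, disturb
# the drive with small random writes, then measure again to see how
# quickly throughput recovers. All sizes are arbitrary example values.

PATH = "testfile.bin"        # hypothetical file on the drive under test
CHUNK = 1024 * 1024          # 1 MB sequential chunk

def sequential_write_mbps(n_chunks: int) -> float:
    """Write n_chunks of 1 MB sequentially and return MB/s."""
    buf = os.urandom(CHUNK)
    start = time.perf_counter()
    with open(PATH, "wb") as f:
        for _ in range(n_chunks):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    return n_chunks * CHUNK / (time.perf_counter() - start) / 1e6

def random_write_burst(n_ops: int) -> None:
    """Overwrite random 4 KB spots within the existing file."""
    size = os.path.getsize(PATH)
    with open(PATH, "r+b") as f:
        for _ in range(n_ops):
            f.seek(random.randrange(0, max(size - 4096, 4096), 4096))
            f.write(os.urandom(4096))
        os.fsync(f.fileno())

before = sequential_write_mbps(32)   # 32 MB sequential pass
random_write_burst(500)              # burst of small random writes
after = sequential_write_mbps(32)    # has throughput recovered?
print(f"sequential before: {before:.0f} MB/s, after: {after:.0f} MB/s")
os.remove(PATH)
```

On an ordinary filesystem the OS page cache blurs the numbers; running against the raw block device, as our benchmarks do, shows the readjustment more clearly.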

Comments
  • Tindytim, February 26, 2009 5:52 AM
    My question really has to be how this is going to affect the future of SATA. Are we going to see a PCI-e based technology for the next generation of data storage, or are we just going to connect everything to a PCI-e slot?
  • danwat1234, February 26, 2009 5:58 AM
    Seems that part of the logic would involve imitating a PCI-express IDE/SATA Controller so the BIOS can assign LBA stuff to it... But I don't know if that would confuse windows when it sees a 'Sata controller card' but it is actually this product... hmm.
  • danwat1234, February 26, 2009 6:00 AM
    Above comment from me is referencing how they would make the card bootable. Sigh. If only I could duct-tape this to my new laptop. Well, the Intel X25-M/E is good enough ;)
  • cangelini, February 26, 2009 6:02 AM
    Tindytim: My question really has to be how this is going to affect the future of SATA. Are we going to see a PCI-e based technology for the next generation of data storage, or are we just going to connect everything to a PCI-e slot?


    Naturally, something like this is going to be very specialized. In mainstream applications, SATA is going to make the most sense. The PHY specification for SATA 6 Gb/s has already been ratified, so it's only a matter of time before the 3.0 standard starts making its way into controller cards and then chipsets. However, knowing what we know about magnetic storage and flash, you're really only going to see 6 Gb/s affect the throughput of SSDs moving forward.
  • ravenware, February 26, 2009 6:31 AM
    Wow, it's even faster than the I-RAM drive.

    That's an expensive piece of hardware too, showing $3K for the 80GB version.

    Maybe they can take AMD's old slogan: "Smash the hourglass".
  • erictaneda2, February 26, 2009 6:41 AM
    Um... 5 TB per day = 5,000 GB per day = 5,000,000 MB per day. At 600MB per second write speed, this is 8,333 seconds, or over two hours of continuous writing at maximum speed.

    How does this mesh with "60 minute IOMeter benchmark run that focuses on write operations would result in wear equivalent to many weeks or months"?

    Either the author is misreading "5TB" as "5GB" or misquoting "5GB" saying "5TB" per day of writes.

    EricT
  • JonnyDough, February 26, 2009 7:36 AM
    cangelini: Naturally, something like this is going to be very specialized. In mainstream applications, SATA is going to make the most sense. The PHY specification for SATA 6 Gb/s has already been ratified, so it's only a matter of time before the 3.0 standard starts making its way into controller cards and then chipsets. However, knowing what we know about magnetic storage and flash, you're really only going to see 6 Gb/s affect the throughput of SSDs moving forward.


    The truth is that I don't think the interface matters as long as it has no latency issues and provides the required bandwidth. Who cares if it's SATA or PCI? As long as you can boot from it, it's fast, and it's not too expensive, it's a viable solution for desktop drives.
  • addiktion, February 26, 2009 7:45 AM
    You see those I/O graphs? This thing is screaming for data. I think they may have a great product on their hands if they can wedge in against SSDs.
  • Anonymous, February 26, 2009 9:13 AM
    Wonder if a fast RAID card with three 30GB SSDs, configured in RAID to about 80GB, would perform equally? Anyone?
  • Turas, February 26, 2009 9:34 AM
    AzUr111: Wonder if a fast RAID card with three 30GB SSDs, configured in RAID to about 80GB, would perform equally? Anyone?



    It would take a lot of drives to get their IOPS but in pure MB/s you could get there with 3 Intel drives.

    The Intel X25-E should really have been included. I am getting 240MB/s writes / 220MB/s reads along with 4,800 IOPS per drive. These things are monsters and, although expensive, they are much better than the ioDrive in the price area.
  • LuxZg, February 26, 2009 9:48 AM
    Wow... $2,400 and more... we won't be using that anytime soon :D
    But the production price can't be that high... it's the pricing for the performance they give. So hopefully, we can expect that in our computers... oh well, in about 10 years, LOL! :D

    And I don't see the Intel X25 being that much better on price. If you do RAID, you need what, 8-9 drives to get that many IOPS? 8x500=$4,000, so that's more expensive than this thing, and we won't even go into the size and power consumption of 8 drives vs one half-height PCIe card.

    This thing looks like a monster to me, even though I'm not professionally into heavy server stuff. And for the performance they are offering, it's not that terrible a price either. Especially if you work with a relatively small amount of data which is accessed by a large number of clients. If you can fit a database or something similar in those ~20GB (and that's a pretty large database for most uses) you'll have a screaming server with this thing.

    Anyway, just blabbering here, this is a good thing. And I can't wait for it to drop some 25x in price :)
  • dangerous_23, February 26, 2009 10:09 AM
    What about doing a benchmark using a software RAM drive such as the one from qsoft? I am getting around 500MB/s throughput in HD Tach on a 2GB partition of RAM; I'd be interested to see how it compares.
  • Turas, February 26, 2009 10:49 AM
    I thought I had read somewhere that the price had gone up closer to $5K for the small one. That is why I referenced the Intel SLC drives as another option. Sure, it may still not give quite the same IOPS, but you would get more space. I guess it boils down to price/MB or price/IOPS depending on the use.
  • kschoche, February 26, 2009 10:59 AM
    I think the real market for something like this is not at all in desktops, but much more likely as an intermediary cache step in storage filers between memory and scsi disks. That is really the only place that can get away with costs of this magnitude in $/GB.
  • clownbaby, February 26, 2009 11:17 AM
    I'll bet this would make a sweet scratch disk for Photoshop. Kind of pricey, but if you send me one I'll tell you how much I like it :)
  • climber, February 26, 2009 11:36 AM
    This kind of performance is possible with a software cache approach from www.superspeed.com; their SuperCache 3 and RAMDISK 9 Plus products can dedicate 1GB+ of memory to caching the disk at block level, or use the RAMDISK to store data.
  • barrychuck, February 26, 2009 12:11 PM
    This is not some new idea! A far cheaper solution is to use a hardware RAID controller such as a Highpoint 3510/20 or an Adaptec 5405/8505 and four or more smaller Samsung SLC drives in RAID 0. The total cost is about $1,300 from the Egg. I am running such a setup with 4 drives, easily hit the numbers this Fusion-io card is hitting, and can boot from it in Windows, OS X, Linux, and a bunch of other OSes. The Samsung SLC SSDs are also rebranded and sold as A-Data, G.Skill, and OCZ; the 32GB version sells for as low as $239 each. While MLC flash varies in performance, SLC is pretty much the same across brands, and the controller is what matters.