OCZ Vertex 4 128 GB: Testing Write Performance With Firmware 1.4

Benchmark Results: Anvil’s Storage Utility

Anvil’s Storage Utility allows us to configure the size of our test file. We start off with a 1 GB file after the drive is secure-erased, returning it to a fresh-out-of-the-box state. We then refill the drive with data copied over from another repository. Specifically, we moved the Windows folder, the Program Files folders, and a number of folders composed of user data, leaving 50.4% free space.

Then, we ran a series of benchmarks from ASU, incrementally increasing the test file size from 4 to 16 GB. Sequential write performance remained steady until we subjected the Vertex 4 to the 16 GB test file, at which point it dropped to 205 MB/s. Although the test file itself was 16 GB, the benchmark's cumulative write operations left 32.1 GB of test data on the drive by the time it finished. Consequently, the 16 GB run ended with less than 50% free space, and that appears to be the tipping point at which performance begins tailing off.

After secure-erasing the drive and filling it to 60%, we repeated the test with a 16 GB file and saw write speeds drop to 104.28 MB/s. The Vertex 4 128 GB appears to need free space for optimal performance.
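Readers who want to sanity-check numbers like these can time a large sequential write themselves. The sketch below is a minimal illustration, not the methodology Anvil's Storage Utility uses; the helper name, chunk size, and paths are all hypothetical.

```python
import os
import time

CHUNK_SIZE = 16 * 1024 * 1024  # write in 16 MiB chunks


def timed_sequential_write(path, total_bytes):
    """Write total_bytes of incompressible data to path and return MB/s.

    Random data defeats controller-side compression (the SandForce
    trick), so the result reflects raw sequential write speed.
    """
    buf = os.urandom(min(CHUNK_SIZE, total_bytes))
    written = 0
    start = time.perf_counter()
    with open(path, "wb") as f:
        while written < total_bytes:
            f.write(buf)
            written += len(buf)
        f.flush()
        os.fsync(f.fileno())  # force data to the drive, not the OS cache
    elapsed = time.perf_counter() - start
    os.remove(path)  # clean up the test file
    return written / elapsed / 1e6  # MB/s
```

Running this with progressively larger `total_bytes` values on a drive filled past 50% would roughly mirror the test-file sweep described above.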

This thread is closed for comments
49 comments
  • danielkr
    This is unfortunate. I purchased four of these drives and configured them in RAID 10. I wanted the read performance and the security of knowing I would not have to reinstall everything if a drive failed. I understood I would only get double write performance, but now that I have about 100 GB of free space left, I am seeing only single-drive write performance. Now I will have to rebuild into a RAID 0 and do regular image backups. :(
  • edlivian
    What is with these games these vendors are playing with firmware? SandForce has its trick with compressible data, and now Indilinx controllers expect you to keep half your drive empty to get the performance boost?!?

    Why can't you just get consistent performance like you do on Samsung 830s and Crucial m4s? There is nothing wrong with consistency.
  • mayankleoboy1
    That's too bad. :(
    I was almost at the point of buying a 128 GB Vertex 4.

    NOT NOW. Will wait for the 1.5 firmware.
    It's strange that this behavior was documented on Tom's only, while multiple other sites have already reviewed this drive with the 1.4 firmware and given it a very good rating.

    +1 to Tom's review team
  • kikiking
    So let me get this straight: just like the Vertex 3 Max IOPS and regular editions, there is a performance drop? I could have sworn this drive had no garbage collection. Either way, I may buy one, or might as well wait until I see the 1.5 firmware.
  • g-unit1111
    Man I was really interested in seeing what Indilinx could do, and I've been recommending this drive on all high end builds. I was even thinking of replacing my Intel 320 with one. Guess I'll be sticking with the Crucial M4 and Plextor M3 from now on.
  • Todd Sauve
    According to OCZ this is the way the firmware for the Vertex4 128GB is designed to work and part of the reason is because of the way MS made the NTFS file system. They say the SSD will only slow down for a short time and then go back up to near normal speeds.

    They also tell me that Tom's Hardware is actually aware of this.

    Read about it here: http://www.ocztechnologyforum.com/forum/showthread.php?102254-Anormal-128GB-Vertex-4-Performance
  • waxdart
    danielkr said:
    This is unfortunate.

    I read that RAID doesn't support TRIM (never checked beyond that) so I've not bothered with it. Have you done any tests with this?
  • edlivian
    Todd Sauve said:
    According to OCZ this is the way the firmware for the Vertex4 128GB is designed to work and part of the reason is because of the way MS made the NTFS file system. They say the SSD will only slow down for a short time and then go back up to near normal speeds.


    I am sorry, but there should never be a slowdown. This is an SSD; people expect top speed from their drives all the time.
  • You guys realize that all SSDs slow down when they're half full?
  • Kurz
    edlivian said:
    What is with these games these vendors are playing with firmware. Sandforce has a trick with compressible data, indelix controllers now expects you to have half your drive empty to get the performance boost?!?
    Why can't you just get the consistent performance like you do on samsung 830's ad crucial m4's, there is nothing wrong with consistency.


    Reading comprehension fail... Let's say you have 20 GB of free space (on an SSD with 512 GB total).

    If you try to write a file that is more than 10 GB, you'll experience less-than-optimum performance.

    Note we are talking about Sequential Writing.
  • edlivian
    Kurz said:
    Reading Comprehension Fail... Let say you have a 20 Gigabytes of Free Space (The SSD has 512GB total).
    If you try to write a file that is more than 10 GB you'll experience less than optinum performance.
    Note we are talking about Sequential Writing.


    I understand very well: during the momentary transition from performance mode to storage mode there is a temporary slowdown. How come other vendors don't exhibit this issue? I don't get why OCZ would create a problem for themselves. Why have two storage modes? Make life simple; make one storage mode.
  • Kurz
    edlivian said:
    i understand very well, during the momentary transition from performance mode to storage mode there is a temporary slowdown. How come other vendors don't exhibit this issue, I dont get why ocz would create a problem for themselves, why have two storage modes? Make life simple, make one storage mode.


    It's improbable you'll ever experience this slowdown.
  • willard
    waxdart said:
    I read that RAID doesn't support TRIM (never checked beyond that) so I've not bothered with it. Have you done any tests with this?

    No testing needed, it simply doesn't work in the overwhelming majority of RAID controllers. Intel produced a beta version of their RST driver that enables RAID 0 support, and there's some really limited support in Linux via dmraid, but that's pretty much it in terms of RAID support and TRIM.
  • mousseng
    edlivian said:
    i understand very well, during the momentary transition from performance mode to storage mode there is a temporary slowdown. How come other vendors don't exhibit this issue, I dont get why ocz would create a problem for themselves, why have two storage modes? Make life simple, make one storage mode.

    While OCZ won't fully explain the reasoning behind this (trade secrets and whatnot), the likely motive is actually to improve performance in low-capacity situations.

    As I'm sure you know, as an SSD fills up, it slows down. I'd imagine the shift, then, reworks the storage algorithms for the Vtx4 to improve speeds in this more-filled state. If you read the thread Todd Sauve posted, you'll get a much better explanation from much smarter people than I.
  • ammaross
    First, the article frequently uses comments like "Our Iometer benchmark started with static data occupying 50.4% of the drive," implying that you filled the drive to 50.4% used. HOWEVER, all your analysis implies that you actually meant "50.4% free," so it should read "static data occupying 49.6% of the drive."

    Second, the dual storage modes are actually smart. Misleading, but smart. "Performance" mode is obviously MLC being treated as SLC, and "storage" mode is where it's once again treated as MLC. It's a known technique, mentioned in research by storage vendors, for making "cheap" unified product lines by using the same MLC chips in ALL their drives rather than maintaining separate SLC and MLC supplies per line. You get fast performance by not worrying about the extra bit per cell, but as soon as the controller has to worry about it, things slow down.
  • ammaross
    Todd Sauve said:
    According to OCZ this is the way the firmware for the Vertex4 128GB is designed to work and part of the reason is because of the way MS made the NTFS file system. They say the SSD will only slow down for a short time and then go back up to near normal speeds.
    They also tell me that Tom's Hardware is actually aware of this.
    Read about it here: http://www.ocztechnologyforum.com/ [...] erformance

    HD Tune Pro doesn't use partitions/formatting at all, but raw writes to the drive, so hiding behind a "It's NTFS, and thus MS fault" does not work in this case.
  • mousseng
    444610 said:
    Second, the dual storage modes is smart actually. Misleading, but smart. "Performance" mode is obviously MLC being treated as SLC and "Storage" mode is where it's once again treated as MLC. It's a known technique mentioned in research (by storage vendors) to make "cheap" unified product lines by using the same MLC chips in ALL their drives, rather than having two supplies for SLC vs MLC per line. Fast performance by not worrying about the extra bit per cell, but as soon as they have to worry about it, things slow down.

    Hm. My understanding is that SLC NAND is more expensive, more reliable, and faster, yet is not capable of providing the level of storage MLC can. Does this mean that the transition from 'SLC' mode to MLC mode would be caused by 'SLC' mode running out of capacity? OCZ has stated that the performance-to-storage threshold is different between the 128GB model and 256GB model, which seems a bit conflicting with this theory; there's also the fact that the 512GB model has no performance-mode switch at all.

    OCZ has also stated that this performance dip only occurs when switching between performance and storage, and that the average user wouldn't notice it at all, as the speeds would go back up once it's fully transitioned into storage mode (although it won't be as fast as performance).

    Quote:
    HD Tune Pro doesn't use partitions/formatting at all, but raw writes to the drive, so hiding behind a "It's NTFS, and thus MS fault" does not work in this case.

    The reason that was brought up is exactly that: no user sits there writing in RAW mode. They're saying it's designed to take advantage of the file system, not that the slowdowns are caused by it.
  • ammaross
    mousseng said:
    Hm. My understanding is that SLC NAND is more expensive, more reliable, and faster, yet is not capable of providing the level of storage MLC can. Does this mean that the transition from 'SLC' mode to MLC mode would be caused by 'SLC' mode running out of capacity? OCZ has stated that the performance-to-storage threshold is different between the 128GB model and 256GB model, which seems a bit conflicting with this theory; there's also the fact that the 512GB model has no performance-mode switch at all.
    OCZ has also stated that this performance dip only occurs when switching between performance and storage, and that the average user wouldn't notice it at all, as the speeds would go back up once it's fully transitioned into storage mode (although it won't be as fast as performance).
    The reason that was brought up is exactly that - no user sits there writing in RAW mode. They're saying that it's designed to take advantage of the file system, not that the slows are caused by it.

    The 128GB drive likely doesn't saturate all I/O channels available to the controller (due to 1 64Gb die per channel likely, or even 2 dies per channel with only half the channels used). The 256GB+ sized drives may not have this dual-mode firmware enabled since they achieve 450+MB/s performance simply due to physical hardware. The performance/storage modes would be a trick used to achieve 256GB+ drive performance, but at 128GB-drive sizes. Also, treating MLC as SLC effectively halves the available space (1 bit per cell instead of 2 bits), thus at 50%, you hit that "out of space in performance mode" threshold, thus forcing the controller into "storage" mode (packing 2 bits per cell, thus having the read-alter-write cycle problems again).

    As for NTFS, no user normally writes in RAW mode, but doing so can simulate dumping a large AVI or somesuch to the drive. Likely they were hoping disk write caching of Windows would save them from micro-writes.
  • mousseng
    444610 said:
    Also, treating MLC as SLC effectively halves the available space (1 bit per cell instead of 2 bits), thus at 50%, you hit that "out of space in performance mode" threshold, thus forcing the controller into "storage" mode (packing 2 bits per cell, thus having the read-alter-write cycle problems again).

    This is why I brought up the 256GB model - I haven't seen any specific numbers for it, but it was made apparent that it employs a similar data/drive fill manager as the 128GB:
    Quote:
    The 256's also do this but to a much less degree and at a different fill value.

    While treating MLC as SLC in the 128GB model makes perfect sense, I don't quite see how it'd work out with the 256GB model if it doesn't swap modes at 50%.

    This is really interesting me; I know OCZ is being real protective of their technology behind the Vertex 4, but I would really like to know how this is working.
  • sewalk
    One more reason to pay a little more and get a Samsung.
  • ammaross
    Todd Sauve said:
    In reality, OCZ has said that Tom's knows about their special algorithms and basically how they are accelerating write speeds on the 128 gig Vertex 4. And that they have a special slow down mode at 50% full that will resolve itself after a few reboots and that no one is even likely to notice any change in performance.
    So why is Tom's even printing an article like this? To create a controversy where there really isn't one?

    I see what you're saying:
    Quote:
    Reviewers have not tested this as they need to ONLY test in file system mode, when you test in RAW the drive does not recover as well...but how many people do you know sit with drives in RAW mode benching them all day? So...you use the drive it will speed up once the algorithms have done their thing. The 256's also do this but to a much less degree and at a different fill value. The 512's do not need to do this...so they don't

    That's from the post chain he linked. Basically the "conversion" between performance to storage mode takes longer than the IOMeter, etc tests, so they don't see the performance go back up. Makes me wonder what the "conversion" is though. Still thinking SLC to MLC mode. Only thing besides compression I can think of to speed up IOPS on smaller drives and have a coincidental 50% tipping point.
  • Todd Sauve
    This is taken from a post on the thread I referenced by a guy from Australia named canthearu. I think he has hit the nail on the head. (By the way, OCZ acknowledged all of this "controversy" two weeks ago, so Tom's is being a bit disingenuous by claiming to have just found this out. Does it really take two weeks to get an article into print?)

    **********************************************

    I think I understand now why the vertex 4 128gig on firmware 1.4 performs the way it does.

    The way OCZ have programmed the Vertex 4 128gig firmware is quite ingenious actually.

    For MLC NAND, as discussed in this paper (http://cseweb.ucsd.edu/users/swanson...11PowerCut.pdf), pages are actually programmed twice to store 2 bits per cell. The first time a page is programmed from erased is very fast, as it is going from a known state to a roughly central voltage (or staying unprogrammed). The second time a page is programmed is quite slow, because there is existing data that needs to be conserved, while the programmed state is adjusted by 1/4.

    So while more than half the drive is free, the 128 gig Vertex 4 programs only the first layer of each page. This is the performance mode; it performs more like SLC NAND than a normal MLC drive would, resulting in amazingly high write speeds on the 128 gig drive (350 MB/s or more). However, once the drive reaches 50% full, it can no longer perform any first-layer writes and must now write the much slower second layer, which seems to take up to four times as long. This is the switch to storage mode.

    During storage mode, while the drive is more than half full, it will maintain 2 bits per cell, but when garbage collection is performed it will attempt to free up NAND blocks so future writes can be performed on only the first layer, resulting in pretty good performance for burst writes.

    Other drives, along with the Vertex 4 128 gig under firmware 1.3, must distribute first- and second-layer writes evenly, resulting in more consistent, but quite a lot slower, performance over the full surface of the drive.

    The reason the Vertex 4 256 and 512 gig don't really show this behavior is that it isn't needed for high performance ... the controller can interleave enough operations that, at least for the 512 gig drive, it can distribute writes to the first and second layers evenly without a performance impact.

    Edit: however, this is just a guess from someone who doesn't have any inside knowledge, so I could be completely wrong.

    ************************************************

    I don't know about you guys, but to me this looked mighty ingenious on OCZ's part.

    Todd Sauve
  • serendipiti
    When I first read the article, I was more confused than before... After reading the comments and a whole night of restorative sleep, I think I finally understood...
    - The write speed improvement on the Vertex 4 should be possible on Samsung and other SSDs (if Samsung isn't doing the trick already).
    - The speed improvement comes at a price: using half the free capacity, or suffering a GC process that consolidates from SLC to MLC once half the free space is filled. 90% of the time, in 90% of scenarios, this shouldn't be an issue.