128 KB Sequential Write

Unlike sequential reads, sequential writes tend not to scale with queue depth. Writes are more dependent on the number of dies a given task can be spread across. For example, higher density means the 120 GB M500 sports just eight dies, whereas the 128 GB m4 had 16. That explains why the two lower-capacity M500s trail the previous generation at their respective capacities.
We have a hard time making a big issue out of this, particularly when the larger models are so much quicker than their predecessors. The quickest m4 topped out near the 265 MB/s mark, while the larger M500s exceed 400 MB/s. The difference isn't earth-shattering, but it's difficult to imagine a desktop application where higher sequential writes might change the experience drastically.

We add in data from elsewhere in the consumer SSD space, and Crucial's larger M500s are up there in the action. It'd be an exaggeration to say that the Extreme II, 840 Pro, and OCZ Vector pistol-whip the 480 and 960 GB Crucial drives. However, a 100 MB/s lead in favor of OCZ's Vector does seem a little brutal. Now's probably a good time to mention that there are SSDs framed as mainstream and others marketed as performance-oriented. The M500 falls into the former category, while the Vector, 840 Pro, and Extreme II are categorically high-end drives where speed is concerned.
If we chart out the maximum observed 128 KB sequential write performance from Iometer, it becomes apparent that the 120 GB M500 is not stellar when it comes to writes. Neither is Samsung's 120 GB 840. Achieving half (or less) of the performance posted by two-bit-per-cell-based SSDs sporting similar capacity, the Crucial and Samsung models are particularly hobbled. One is hamstrung by TLC NAND, while the M500 is hurt by a move to higher-density 128 Gb flash.
The M500 and 840 butt heads again at 240 GB, where Crucial's margin of victory is just 8 MB/s. Samsung's 840 EVO is another matter entirely. Once its Turbo Write buffer runs out, though, it's back down to the regular 840's performance level.
- Crucial's New m4 (Plus 496) Gets Reviewed
- Inside Of Crucial's M500 SSD
- Test Setup And Benchmarks
- Results: 128 KB Sequential Reads
- Results: 128 KB Sequential Write
- Results: 4 KB Random Reads
- Results: 4 KB Random Writes
- Results: Tom's Storage Bench v1.0
- Results: Tom's Storage Bench, Continued
- Results: PCMark 7 And PCMark Vantage
- Results: File Copy
- Results: Power Consumption
- Head To Head: Crucial's M500s Vs. Samsung's 840 EVOs
- We Like The High-Capacity Crucial M500 SSDs Best...

The SSD 840 is rated for 1,000 P/E cycles, though it's been seen doing more like ~3,000. At 10 GB/day, a 240 GB drive would last for 24,000 days, or about 66 years, and that's using the 1K figure.
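That endurance math is easy to sanity-check. Here's a quick sketch of the calculation the post is doing; the 1,000 P/E rating and 10 GB/day are the post's own assumptions, and write amplification is ignored for simplicity, so this is a paper upper bound rather than a real-world prediction:

```python
def ssd_lifetime_years(capacity_gb, pe_cycles, writes_gb_per_day):
    """Naive endurance estimate: total rated writes divided by daily writes.
    Ignores write amplification and wear leveling."""
    total_writes_gb = capacity_gb * pe_cycles
    days = total_writes_gb / writes_gb_per_day
    return days / 365

# 240 GB drive, conservative 1,000 P/E rating, 10 GB written per day
print(round(ssd_lifetime_years(240, 1000, 10), 1))  # → 65.8
```

That's the roughly 66 years the post arrives at: 240,000 GB of rated writes divided by 10 GB/day gives 24,000 days.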
You're free to waste money if you want, but SLC now has little place outside write-heavy DB storage.
EDIT: Screwed up by an order of magnitude.
You are totally correct! You win a gold star, because I didn't even notice. Thanks for catching it, and it should be fixed now.
Regards,
Christopher Ryan
Not only are consumer workloads completely gentle on SSDs, but modern controllers are super awesome at extending NAND longevity. I was able to burn through 3,000+ P/E cycles on the Samsung 840 last year, and it's only rated at 1,000 P/E cycles or so. You'd have to put almost 1 TB a day on a 120 GB Samsung 840 TLC to kill it in a year, assuming it didn't die from something else first.
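That "almost 1 TB a day" figure checks out. Using the ~3,000 observed cycles from that testing, the daily write rate needed to exhaust a 120 GB drive in one year works out like this (again a rough sketch that ignores write amplification):

```python
def daily_writes_to_die_in(capacity_gb, pe_cycles, years):
    """GB/day needed to burn through the rated (or observed)
    P/E cycles within the given time span."""
    total_writes_gb = capacity_gb * pe_cycles
    return total_writes_gb / (years * 365)

# 120 GB Samsung 840, ~3,000 observed P/E cycles, one year
print(round(daily_writes_to_die_in(120, 3000, 1)))  # → 986 GB/day
```

986 GB every day for a year — close enough to 1 TB/day, and far beyond any consumer workload.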
Regards,
Christopher Ryan
You may be thinking of the controller failures some of the Sandforce drives had, which are completely unrelated to the type of NAND used.
I'd like to see TH take an SSD, put it through this 10 GB/day, and see how long it actually works.
After reading this article, I think Crucial's M500 hit the jackpot. We'll see Samsung's response, and that's very good for the end consumer.
Show me a report with a reasonable sample size (more than a couple of dozen drives) that says they have >50% annual failures.
A couple of years ago Tom's posted this: http://www.tomshardware.com/reviews/ssd-reliability-failure-rate,2923.html
The majority of failures were caused by the firmware in early SandForce drives. That's gone now.
EDIT: Missed your post. First off, that's a perfect example of self-selection. Secondly, those who buy multiple SSDs will appear to have n times the actual failure rate, because if any fail they all appear to fail. Thirdly, that has nothing to do with whether or not it is a 1bpc or 3 bpc SSD - that's what you started off with.
Sounds a bit like a sore loser argument, unfortunately.
SSDs aren't perfect, but they generally do live long enough to not be a problem. Most of the failures have been overcome by now too.
Just realised there's an error in my original post - off by a factor of ten. Should have been 66 years.
Also, SLC based SSD-s are usually "enterprise", so they are designed for reliability and not performance, and they don't use some bollocks, overclocked to the point of failure, controllers. And have better optimised firmware...
Tell that to all the people on this forum still running an Intel X25-M that launched all the way back in 2008, and to my Samsung 830 that's been working just fine for over a year.
See, what you're paying attention to is the loudest group of SSD owners: the owners with failed SSDs.
It's the classic issue: if someone has a problem, they're going to be the one you hear from, while the quiet group isn't having the problem.
Those of us who don't have issues (such as myself) don't mention our SSDs, and are probably off complaining about something else that has failed.
Those don't seem like opinions to me. It's customary to include some form of leeway in the sentence, like 'IMHO', 'often', or 'I've heard' etc.
Assuming the system has more than enough RAM to avoid needing any significant amount of swapping. If someone with 4GB RAM uses a 16GB swapfile to avoid upgrading to the 8-16GB RAM he really should have, he could end up writing over 1GB/minute.
I have ended up overcrowding my RAM many times in the past, and it has a tendency to make my computers practically unusable with mechanical HDDs, at which point I had to spread my programs and swapfile across multiple HDDs to reduce the IO load on individual drives. I imagine this would burn through SSDs fairly quickly.
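That swap scenario is worth quantifying. Taking the hypothetical 1 GB/minute of swap traffic from the post above and running it around the clock, even a 240 GB drive credited with an optimistic 3,000 P/E cycles gets used up fast relative to the decades a normal workload allows (a back-of-the-envelope sketch; write amplification is again ignored):

```python
def swap_burnout_days(capacity_gb, pe_cycles, swap_gb_per_day):
    """Days until the drive's rated writes are used up by swap traffic alone."""
    return (capacity_gb * pe_cycles) / swap_gb_per_day

# 1 GB/minute around the clock = 1440 GB/day, on a 240 GB drive
# credited (optimistically) with 3,000 P/E cycles
print(round(swap_burnout_days(240, 3000, 1440)))  # → 500
```

Roughly 500 days, versus the ~66-year figure for 10 GB/day — which is why chronic swapping is about the only desktop pattern that genuinely threatens SSD endurance.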
Also, SLC based SSD-s are usually "enterprise", so they are designed for reliability and not performance, and they don't use some bollocks, overclocked to the point of failure, controllers. And have better optimised firmware...
Anecdotal evidence is pretty useless. People with very good or very bad experiences tend to write reviews. People generally don't write reviews for random pieces of hardware that just work as expected. Provide citations with statistics to support your statements if you want anyone to take them seriously.
I will not be replying to this topic any more. All I wanted to do was state my opinion, but there had to be some smartass telling me that I don't have the right to do it. Noooo, I have to source it. This is my opinion, get over it.
You are entitled to your opinion, but you are making bold statements without any facts. A lot of people use forums like these to research products they are thinking about buying, and you are spreading misinformation about SSDs without any evidence for your statements. I personally have four computers with SSDs in them that are over two years old, and I haven't had a single issue or failure. It really pisses me off when people spread inaccurate statements that may turn away a potential SSD user. Out of all the PC upgrades I have done in the past 12 years, the SSD has been the best, most noticeable improvement.
Just to stick an oar into the reliability issue, my Samsung 830 has run reliably for over two years now. The only SSDs I had fail in use were a couple of Sandforce drives, but their replacements have thus far been reliable. I think InvalidError has a good point about RAM though; I tend to use at least 8GB, which probably cuts down swapping quite a bit. I also prefer to close programs completely rather than have a lot of windows open, which would also reduce swapping. With a SSD, they re-open pretty quickly anyway.
I will not be replying to this topic any more. All I wanted to do was state my opinion, but there had to be some smartass telling me that I don't have the right to do it. Noooo, I have to source it. This is my opinion, get over it.
LOL, when you come onto an article about SSDs and say nonsense like you did, you have to expect to get hate, buddy. MLC/TLC is very viable and lasts much longer than a year, kid. Some drives do fail, and if they do, you get a replacement. Other people's drives are not lemons and last a normal lifetime. Common sense, bro.
Oh, I highly recommend getting the drive migration kit from Crucial for ~$20. It makes life much easier. It's a USB 3.0 SATA drive connector for your M500. Just boot the supplied CD and run it in auto mode to copy over your drive exactly, making any sizing adjustments needed to the partitions to get everything to fit. It works very nicely.