Intel's New 520 Series SSD Codenamed "Cherryville"
"Cherryville", Intel's new high-end 520 Series SSD, is based on 2.5-inch SATA 6 Gb/s form-factor and comes in 60 GB, 120 GB, 160 GB, 240 GB, and 480 GB capacities (versus just 120 GB & 240 GB with the 510 Series). The drive will utilize 25 nm multi-level cell (MLC) NAND flash memory made by Intel and features support for TRIM, SMART, NCQ, and ACS-2 compliance. Intel looks to be setting up to battle the SandForce SF-22xx SSDs at each capacity level and price-point.
Initial performance information on "Cherryville" points to sequential speeds of up to 530 MB/s read and 490 MB/s write, and random performance of up to 40,000 IOPS read and 45,000 IOPS write. As with other SSD manufacturers, performance is expected to vary between capacities, so we'll have to take a wait-and-see approach on the final numbers for each capacity. In addition, the drives are rated at 1.2 million hours MTBF, can operate between 0 and 70°C, and withstand up to 2.7 G (RMS) of vibration.
Source: VR-Zone
Intel is expected to begin production of the 520 Series SSD in the fourth quarter of 2011.
Lately I'm getting a little impatient, and I'm very tempted by the prices of the "new" Vertex Plus. Does anyone know if the Vertex Plus drives are any good?
I'm waiting for Tom's or Anandtech to do a review on them.
I thought NCQ was for HDDs with spinning platters, to read data and then sequence it in logical order...
Intel is going to have to lighten the load and make these drives faster; other SSDs and similar storage are approaching 1 GB/s.
1,200,000 hours / 8 (hours a day) / 365 (days a year) ≈ 410 years
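For anyone who wants to check the arithmetic, here's the same back-of-the-envelope calculation in Python, for both an 8 h/day pattern and 24/7 operation (the usage patterns are just illustrative):

```python
# Converting the rated 1.2 million hour MTBF into calendar years under
# two illustrative usage patterns.

MTBF_HOURS = 1_200_000  # Intel's rated figure for the 520 Series

hours_per_year_8h = 8 * 365     # 8 hours a day, every day
hours_per_year_24x7 = 24 * 365  # always on

print(MTBF_HOURS / hours_per_year_8h)    # ~410.96 years at 8 h/day
print(MTBF_HOURS / hours_per_year_24x7)  # ~136.99 years running 24/7
```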
Good luck with that.
Nobody is going to live that long to prove it.
Still, nobody is going to live 136 years to tell.
Manufacturers seem to make up big numbers that they know they can't possibly prove.
It's called marketing.
If you do a lot of disk writes you can get drives to fail, and then use the mean time to failure across the sample and the average user's disk usage to estimate the MTBF.
They didn't have to wait 136 years to know the MTBF.
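That's roughly how it's done in practice: total accumulated device-hours across a big test fleet divided by the number of observed failures. A hedged sketch (the fleet size, test length, and failure count here are made up):

```python
# MTBF is typically estimated from a large test fleet, not by running one
# drive until it dies: total accumulated device-hours / observed failures.
# Fleet size, test length, and failure count below are hypothetical.

drives_on_test = 1000
test_hours = 6000            # roughly 8 months of 24/7 stress testing
failures_observed = 5

device_hours = drives_on_test * test_hours        # 6,000,000 device-hours
mtbf_estimate = device_hours / failures_observed  # 1,200,000 hours

print(f"Estimated MTBF: {mtbf_estimate:,.0f} hours")
```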
I guess the math is OK; that would only be 136 years or so. But then again, what about the start-up routine, constant updates to the registry and to programs? What about the occasional virus that multiplies itself a million times on the drive? Who knows?
Of course, just 30 years in, you will look like a retro dweeb who uses a camel to get to work, if you are still using this thing.
How about backing up those claims? I am willing to split the difference... how about a 205-year warranty?
Of course, that method can be questioned. If things are made very close to identical, you may get virtually no failures at first and then all of them start dying like flies 5 years later.
If we were stacking straw on camels' backs (I had to mention camels again): 30 lb will not break any camel's back, but get near 600 lb (or whatever) and backs start snapping.
The distribution is not necessarily normal. It could have a very steep spike.
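To illustrate that, here's a quick, purely illustrative simulation: an exponential distribution models the "constant random failure rate" case, while a Weibull distribution with a large shape parameter produces exactly the kind of steep late-life spike described above. Same mean lifetime, very different failure patterns (all numbers made up):

```python
from math import gamma
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
mean_life = 5.0  # mean lifetime in years; purely illustrative

# Constant random failure rate (memoryless): exponential distribution.
constant_rate = rng.exponential(scale=mean_life, size=n)

# Steep wear-out spike: Weibull with a large shape parameter, scaled so
# the mean lifetime matches (mean = scale * gamma(1 + 1/shape)).
shape = 10.0
scale = mean_life / gamma(1 + 1 / shape)
wear_out = scale * rng.weibull(shape, size=n)

for name, data in (("constant rate", constant_rate),
                   ("wear-out spike", wear_out)):
    in_window = ((data >= 4) & (data < 6)).mean()
    print(f"{name}: mean life {data.mean():.2f} y, "
          f"failing in years 4-6: {in_window:.0%}")
```

Both populations average the same lifetime, but in the wear-out case almost everything dies in a narrow window, just like the straw-and-camel picture.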
Or what if, for instance, there are abundant cosmic rays in a couple of years? Cosmic rays can do a lot of damage to computers. Other background radiation can also do damage, as can quantum tunneling effects from heat. We don't really know that much about solar cycles yet, and cosmic rays can come from distant sources too. A nearby supernova or other event might damage computers at a faster rate.
You're pretty close here, actually. The MTBF is indeed calculated from large sample sizes, but your misgivings about the techniques are unfounded. That's simply not how these things work. Statistics doesn't just give you a number; it also tells you the likelihood that the actual number is significantly different from the one you got. The industry-standard confidence requirement is 99.99966%. Is that 0.00034% chance that Intel was wrong about the MTBF really worth getting worked up over?
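For the curious, the textbook way to put a confidence bound on an MTBF from life-test data uses the chi-squared distribution; a sketch along those lines (assuming scipy is available; all test numbers are hypothetical):

```python
from scipy.stats import chi2

# One-sided lower confidence bound on MTBF from a time-terminated life
# test (standard chi-squared method). Test numbers are hypothetical.

device_hours = 6_000_000  # total accumulated device-hours on test
failures = 5              # observed failures
confidence = 0.95         # one-sided confidence level

dof = 2 * failures + 2
mtbf_lower = 2 * device_hours / chi2.ppf(confidence, dof)

print(f"Point estimate:  {device_hours / failures:,.0f} hours")
print(f"95% lower bound: {mtbf_lower:,.0f} hours")
```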
It's worth noting that failures are actually independent of how long the drive has been operating. Obviously mechanical drives experience wear, but it's not this wear that causes most failures. The vast majority are effectively random events that just happen out of the blue with no warning.
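That "out of the blue" behaviour is exactly the memoryless exponential failure model; a tiny illustrative check that an old drive is no more likely to fail in the next year than a new one (the scale is made up):

```python
import numpy as np

rng = np.random.default_rng(1)
lifetimes = rng.exponential(scale=5.0, size=1_000_000)  # illustrative scale

# Memoryless property: surviving 3 years doesn't change the odds of
# surviving one more year.
p_new = (lifetimes > 1).mean()
p_old = (lifetimes[lifetimes > 3] > 4).mean()
print(f"P(new drive survives 1 year):       {p_new:.3f}")
print(f"P(3-year-old survives 1 more year): {p_old:.3f}")  # ~equal
```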
As far as the potential for a steep spike later in life, those would occur because of unknown defects in the product. You obviously can't account for something you don't know you need to account for, and any product is as likely as any other to have such a defect. If you throw out this MTBF because of such a possibility, you'd logically have to throw out every other one as well.
Really, everyone is getting way too worked up over MTBF. As a consumer, you shouldn't care about that number anyway. MTBF doesn't mean a thing unless you're looking at a large sample size. Unless you're running a data center, you do not have a large sample size.
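To put a number on that: under a constant-failure-rate reading of the 1.2 million hour figure, a single always-on drive has well under a 1% chance of failing in any given year, while a big fleet sees failures all the time. A rough sketch:

```python
import math

MTBF_HOURS = 1_200_000
HOURS_PER_YEAR = 24 * 365

# Constant failure rate (exponential) assumption:
annual_failure_prob = 1 - math.exp(-HOURS_PER_YEAR / MTBF_HOURS)
print(f"One drive, one year: {annual_failure_prob:.2%} chance of failure")

# A hypothetical 10,000-drive data center:
fleet = 10_000
print(f"Expected failures per year across {fleet} drives: "
      f"{fleet * annual_failure_prob:.0f}")
```

Roughly a 0.7% chance per drive per year, but around 73 expected failures a year across 10,000 drives. That's why MTBF matters to data centers and not to you.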
And a minor nitpick, but we know a lot more about the sun's cycles than you seem to think. Astronomers have been studying it for close to 200 years now (15 full cycles), and we actually know a great deal about it. The myth that it's some mysterious thing operating on massive timescales is commonly perpetuated by global warming skeptics.