SSD reliability

ram1009

Distinguished
I'm considering using an SSD as a boot drive, mostly for reliability. At least I think that's a benefit, but nobody ever seems to mention SSDs for anything other than speed. Am I wrong?
 

ram1009

Distinguished



I'm not in a position to hear whether database servers run SSDs or not. Further, I suspect the equation for determining their choice would be different from mine.
 

blackhawk1928

Distinguished
^Please, ignore this man.
Servers and corporations use HDDs because SSDs are more expensive to buy and at their current prices do not have a good ROI (return on investment); business is usually a few years behind the consumer in adopting new technology. An SSD farm is superior to an HDD farm: the drives run cooler, are far more durable, use less power, and are generally considered to have a far longer lifespan than HDDs. That means less power for the drives themselves and less power for A/C cooling. And on top of that, vibration is far more dangerous to an HDD than to an SSD.

Also, let's not forget: SSDs are faster and never need to be defragmented.

http://www.tomshardware.com/reviews/ssd-hdd-sata,2115-2.html

http://www.storagesearch.com/bitmicro-art3.html

 
ram1009 - Speed is usually the main reason given for switching from hard disk drives to solid state drives. Initially SSDs were considered more reliable than HDDs. However, it now appears that overall reliability is about equal, with some exceptions.

pepe2907 - I am not sure how to interpret your statement. There are database servers running on SSDs, especially at very large financial institutions, where they cut database compilation time in half: a five-hour compilation now takes only 2.5 hours. Normally gamers and enthusiasts would not be doing something like that. The only posts in this section that I remember were two about VMware experiments, where time to complete projects was also cut in half.
 

pepe2907

Distinguished
Aug 24, 2010
643
0
19,010
OK, there is a HUGE mismatch here. When they tell you how reliable SSDs are /and "they" is always somebody somehow involved in the manufacturing or promotion of SSDs/, they are talking about SLC SSDs, whose write endurance is considered to be about 100,000 cycles per cell /these are the SSDs used in large corporate servers too - and only these - but they are very expensive/. On the consumer market they sell you only MLC SSDs, because they are much cheaper, and their endurance is considered to be between 1,000 and 10,000 write cycles per cell. They use address remapping to protect the cells by spreading usage between areas, and similar techniques to prolong the life of the drive, so by no means will it die after 1,000 writes. But the data they advertise is for SLC SSDs, while they sell you MLC SSDs.


ram1009, are you in a position to hear anything, or just what you like to hear?
 

blackhawk1928

Distinguished


You are correct: SLC drives do have higher endurance and generally last longer, as well as being faster, than MLC drives. However, SLC drives are also sold on the consumer market; they are just more expensive. MLC drives may not last as long as SLC drives, but they still generally last longer and are far more reliable than HDDs.

Every retailer I've seen clearly states whether the drives they sell are SLC or MLC and provides specifications for both. I don't know of any retailer that does what you say they do - please find me a single one.
 

pepe2907

Distinguished
Aug 24, 2010
643
0
19,010
I have several HDDs that are 10+ years old, and they still work if I connect them. I haven't used them in a long time, but some are really old - more like 15 years or so /the first PC I assembled fully by myself was in '92, I think, and I have been a computer enthusiast for much longer/ - and in their time they did their duty. They are small in capacity, with old interfaces, but still kicking /and no errors/, and until I see an SSD survive that long /actually it's theoretically impossible - the cells decay over time, it may just spare you the waiting/ I will not believe an SSD is more reliable.
On the other hand, the new high-density HDDs really are getting more crappy.
But most of the damage I've seen over time was inflicted by software errors, even viruses and the crashes that follow, or just attempts to "repair" :), power supply problems, etc., and really rarely by a native HDD problem. Maybe I can consider myself lucky. But my word is: SSDs are also not immune to these things.
 

blackhawk1928

Distinguished


You are lucky; my longest-lasting HDD made it 3 years... but you must consider that it was on 24/7. If I turned my computer on once a month and then off again, it would probably last 20 years. Also, just because your HDDs lasted a long time doesn't mean they last longer than SSDs. SSDs haven't been around long enough (at least the new ones with features such as TRIM) to determine their lifespan, but all the evidence suggests they will last longer than HDDs... even the MLC ones. Yes, the cells do decay over time, but metal also decays... maybe the SSD decays in 100 years and the HDD in a million, but does it matter?... no. SSDs are immune to far more things than HDDs and are the technology of the future.

-Much more durable: while running my OS, I can play catch with my SSD with no problems... can't say the same for a hard drive.
-I've been running an SSD since November of '09, and it runs 24/7/365 except when I'm on vacation, and I've never had an issue. It's got 1.75TB of writes and the media wearout indicator remains at 100.

I think you are just lucky. HDDs are mechanical and magnetic... the mechanical parts break easily and severely suffer from wear and tear. And God forbid you get a magnet anywhere near an HDD - your data will be scrambled, since it sits on a magnetic platter instead of on microchips.

I actually have no clue if this is the proper way to calculate it, but I have an 80GB drive, and let's assume it only gets 100 writes per cell... even though it's really around 10,000, I think.
I take the capacity (80GB) multiplied by the writes (100), which equals 8,000GB. So in this case I can write 8TB of data to my 80GB drive before it theoretically fails. I already have 1.75TB of writes; let's round that up to 2TB. I've had the drive for around 20 months. So if it took 20 months to do 2TB of writes, the full 8TB would take 8TB / 2TB × 20 months = 80 months, and 80 / 12 = 6.7 years. So if my math is correct, my drive will last almost 7 years even assuming just 100 writes per cell... imagine 1,000 writes or even 10,000 like it should be. I don't know if this is the correct way of determining write endurance, however, so don't attack me if I'm wrong :).
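For what it's worth, that back-of-the-envelope estimate can be sketched in a few lines of Python. This is only the same rough model as in the post (total endurance ≈ capacity × write cycles per cell); it ignores write amplification and wear-leveling overhead:

```python
# Rough SSD lifetime estimate, using the same simple model as above:
# total endurance = capacity * write cycles per cell. Ignores write
# amplification and wear-leveling details, so treat it as a sketch.

def estimated_lifetime_months(capacity_gb, cycles_per_cell,
                              written_gb, months_elapsed):
    """Extrapolate drive lifetime from the observed write volume."""
    total_endurance_gb = capacity_gb * cycles_per_cell  # 80 GB * 100 = 8000 GB
    monthly_writes_gb = written_gb / months_elapsed     # observed write rate
    return total_endurance_gb / monthly_writes_gb

# Figures from the post: 80 GB drive, 100 cycles/cell, ~2 TB in 20 months.
months = estimated_lifetime_months(80, 100, 2000, 20)
print(round(months / 12, 1))  # ≈ 6.7 years
```

Plugging in 10,000 cycles per cell instead of 100 multiplies the result by 100, which is why the post's conservative assumption still comes out at nearly 7 years.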
 

pepe2907

Distinguished
Aug 24, 2010
643
0
19,010
blackhawk1928, I don't switch my computer off for days, even weeks /because I use a swapless configuration it can't go into hibernation/, and I currently have 6 HDDs attached to the system, 3 of them external /sometimes I may switch some of them off/, ranging from ~250 GB /the one usually sitting in my docking station, which I use to move data around, sometimes even doing video directly on it, but rarely/ to ~1500 GB /Hitachi, Maxtor, Samsung, Seagate and WD as brands/, and the oldest is definitely older than 5 years. And they definitely last longer than SSDs, because there is something in the technology of flash memory - some electrical charge or so, I am no specialist here - that makes the memory decay just by sitting. So I may not be absolutely exact, but an SSD can't last longer than 10 years - it's in its nature.
I have also worked in computer laboratories, including ones dedicated to student use, etc., with tens of disks around and many computers on a very slow upgrade cycle. And I never had a mechanically broken HDD :). It may possibly have happened in the labs; I can't be sure, of course.
The write cycles for MLC SSDs are considered to be between 1,000 and 10,000 /no way are they 100/, and I think 1,000 may be considered a bad exception /it's 100,000 for SLC/. But there is also special software that moves the most frequently used logical addresses around the physical medium, giving different addresses to different cells, to prevent some cells from being accessed too frequently. I know this works with the free space on the drive - meaning a bigger drive with more free space will last longer - but some safety mechanism surely works on the whole drive /so they appear to be smart little things :)/. So it's not like 5,000 cycles on one specific cell /which could burn it out in just days if used for a swap file, maybe even sooner/, but 5,000 cycles multiplied by the size of the free space on the drive /or maybe even the entire drive - this is a firmware-dependent thing/ and divided by the size of the chunk of data you read and write at once - because a byte is never written alone but in "bursts", and accessing just one byte may rewrite a whole chunk /at least that's how it works on an HDD; how it works on an SSD is still pretty much a trade secret, or I don't know exactly :)/... wow, I lost it :)
But the thing is, it's kind of complicated to calculate, because it hugely depends on these safety mechanisms, which are trade secrets.
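The wear-leveling idea described above can be shown with a toy model. This is purely illustrative - real controller algorithms are proprietary, as the post says - but it makes the point that spreading writes across many physical blocks multiplies effective endurance:

```python
# Toy wear-leveling model: each incoming write is redirected to the
# least-worn physical block, so no single cell gets hammered. Real
# firmware is far more sophisticated (and secret); this is a sketch.

def max_wear_after(num_blocks, total_writes):
    """Max per-block wear when writes always go to the least-worn block."""
    wear = [0] * num_blocks
    for _ in range(total_writes):
        coolest = wear.index(min(wear))  # least-worn block takes the write
        wear[coolest] += 1
    return max(wear)

# 4,000 writes aimed at one "hot" logical address, spread over 200
# physical blocks: each block absorbs only 20 cycles, instead of one
# cell absorbing all 4,000 and dying long before its 5,000-cycle limit.
print(max_wear_after(200, 4000))  # 20
```

So a swap file that would burn out a single 5,000-cycle cell in days instead spends its writes across the whole free area, which is why more free space means a longer-lived drive.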
 

blackhawk1928

Distinguished
^SSDs haven't been around long enough for such tests, and it's controversial how long it takes them to decay.

Also, you wouldn't happen to be using Fibre Channel hard drives, would you? Those are business-grade HDDs built to a different design standard.

I've never seen an HDD last longer than around 3-4 years, whether SATA or IDE.
 
There are really two different aspects to "reliability".

One is the likelihood that the SSD will fail in some unpredicted way. Because SSDs have no moving parts there's a huge potential for them to be far more reliable than mechanical hard drives. However real-life return rates for SSDs (as shown in the link in Geofelt's post above) seem to be similar to those for hard drives, so in practice that doesn't seem to be bearing fruit. Nonetheless, return rates for Intel SSDs (remembering that these rates are for previous generations of drives) are about 4-5X lower, suggesting that increased reliability really is possible.

A huge issue for this type of reliability is the firmware in the drive controllers, which seem to be involved in most of the reported problems. Unfortunately a lot of SSD vendors are anxious to get products out the door and it seems like firmware qualification and testing is being neglected. For this reason I'd recommend choosing older models which have a proven track record rather than going for the "latest and greatest". And avoid being "first on the block" when firmware upgrades are issued - wait and see if they cause problems for anyone else before taking the plunge. Remember - the pioneers are typically the ones who end up with the arrows in their backs.

The other aspect of reliability is write endurance - all SSDs will eventually stop accepting new data if they're written to often enough. This really isn't a reliability issue, it's a product lifetime issue. If you can find an SSD whose manufacturer makes a clear statement of write endurance (for example, Intel claimed its G2 drives would last "at least" 5 years at a write rate of 20GB/day) then you can predict the life expectancy of your drive, particularly when the drives report write usage via their SMART data. Unfortunately, write endurance is one of those specs that seems to be a little hard to come by. I'm sure this is because write endurance is going down with newer generations of drives as the density of flash memory continues to climb, and nobody likes to publish specs for products that are worse than last year's models.
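When a vendor does publish an endurance figure, turning it into a life expectancy for your own workload is simple arithmetic. A sketch, using the Intel G2 claim quoted above and a hypothetical personal write rate (check your drive's SMART write counters for the real number):

```python
# Translate a vendor endurance claim ("at least 5 years at 20 GB/day",
# as Intel stated for its G2 drives) into total bytes written, then
# compare against your own observed write rate.

SPEC_GB_PER_DAY = 20
SPEC_YEARS = 5
total_endurance_gb = SPEC_GB_PER_DAY * 365 * SPEC_YEARS  # 36,500 GB

# Hypothetical write rate - substitute what your SMART data reports.
my_gb_per_day = 5
years_at_my_rate = total_endurance_gb / (my_gb_per_day * 365)
print(years_at_my_rate)  # 20.0
```

The drive's total endurance is fixed, so life expectancy scales inversely with your daily write volume - write a quarter of the spec rate and the drive should last four times the spec lifetime.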
 

pepe2907

Distinguished
Aug 24, 2010
643
0
19,010
blackhawk1928, the external interface of the drive has no relevance here - whether it's SCSI, IDE, SATA, SAS, or whatever else they develop, it's just the external interface of the disk; it has nothing to do with the mechanics, the magnetic part, or reliability. I've done plenty of work with server hardware and even military hardware - it's actually nothing like a miracle; many people do this.
Although at home I don't have special hardware, I do have disks over 10 years old. You may not, but I do :).
Intel, by my observation, keeps part of the capacity of its drives hidden in "reserve" - for emergency cases :)
It's morning here and I've spent another night in front of the screen chatting too much, so - bye from me for the moment :)
 

blackhawk1928

Distinguished


I'm perfectly aware of that, but Fibre Channel drives are tested far more thoroughly; it's like ECC RAM, I think. Just like regular RAM, but the exceptional pieces get selected for ECC. I think that's how it works; I could be wrong.
 
Blackhawk is right in that the hard drive manufacturers usually have "Enterprise" class drives which have, for example, more extensive ECC codes, different firmware and more robust mechanical design. While SATA drives often come in both "consumer" and "enterprise" versions, Fiber Channel drives are always "enterprise" class drives.

So while the interface isn't the reason why a drive might be more robust, it can be an indication of a more robust drive.
 

pepe2907

Distinguished
Aug 24, 2010
643
0
19,010
sminal, do you really think I don't know there are enterprise-class HDDs? Check what a SAS interface drive is and where they are used. It's a serial interface used exclusively for server-class HDDs at up to 15,000 rpm. Doesn't it sound similar to Fibre Channel /and btw it's Fibre, not Fiber - it's not an optical network/?
Actually it only supports my statement that HDDs are more reliable :)
Good luck :)
 
Very interesting, thanks for the link! The main points I got from the article are:

- SSDs as a general class don't seem to be particularly more or less reliable than hard drives as a general class

- Brand matters, but it's almost impossible to find reliability info by brand

- The current generation of SSDs is beset by firmware issues. To me this translates to: don't buy leading-edge stuff, wait for a product to prove itself first.
 

groberts101

Distinguished
Feb 20, 2010
363
0
18,810


While I know what you mean by "product" there... I'm sure glad there are plenty of speed freaks and early adopters out there willing to dive in head first. Their "head injuries" help push these products to new heights, and I salute their testicular fortitude for taking those chances.
 

blackhawk1928

Distinguished


Agree 1000000000000000%