
SSD reliability

Tags:
  • SSD
  • Boot
  • Storage
July 27, 2011 5:58:08 PM

I'm considering using an SSD as a boot drive, mostly for reliability. At least I think that's a benefit; however, nobody ever seems to mention SSDs for any benefit other than speed. Am I wrong?


July 27, 2011 7:33:18 PM

HDDs, with error-free write cycles in the millions, are considered more reliable, even the "consumer" hard drives. Have you ever heard of a database server running on SSDs?
July 27, 2011 8:26:50 PM

pepe2907 said:
HDDs, with error-free write cycles in the millions, are considered more reliable, even the "consumer" hard drives. Have you ever heard of a database server running on SSDs?



I'm not in a position to hear about database servers running SSDs or not. Further, I suspect the equation for determining their choice would be different from mine.
July 27, 2011 8:32:54 PM

^Please, ignore this man.
Servers and corporations use HDDs because SSDs are more expensive to buy and, at their current prices, do not have a good ROI (return on investment); business is usually a few years behind the consumer in adopting new technology. An SSD farm is superior to an HDD farm: the drives run cooler, are far more durable, use less power, and are generally considered to have a far longer lifespan than HDDs. That means less power for the drives themselves and less power for A/C cooling. And vibration, which is dangerous to an HDD, is far less of a threat to an SSD.

Also, let's not forget: SSDs are faster and never need to be defragmented.

http://www.tomshardware.com/reviews/ssd-hdd-sata,2115-2...

http://www.storagesearch.com/bitmicro-art3.html

July 27, 2011 8:34:00 PM

ram1009 - Speed is usually the main reason given for switching from hard disk drives to solid state drives. Initially SSDs were considered to be more reliable than HDDs. However, it now appears that overall reliability is about equal, with some exceptions.

pepe2907 - I am not sure how to interpret your statement. There are database servers running on SSDs, especially at very large financial institutions, where they cut database compilation time in half: a five-hour compilation now takes only 2.5 hours. Normally gamers and enthusiasts would not be doing something like that. The only posts in this section that I remember were two about VMware experiments; time to complete the projects was cut in half.
July 27, 2011 8:41:54 PM

ram1009 said:
I'm considering using an SSD as a boot drive, mostly for reliability. At least I think that's a benefit; however, nobody ever seems to mention SSDs for any benefit other than speed. Am I wrong?


An SSD will also be quieter; in fact, noiseless.

An SSD will also be more reliable than a hard drive because it has no moving parts:
http://www.intel.com/design/flash/nand/overview.htm

Here is some info on the return rate of different drives:
http://www.behardware.com/articles/810-6/components-ret...
July 27, 2011 9:16:59 PM

OK, there is a HUGE mismatch here. When they tell you how reliable SSDs are ("they" being anybody, but always somebody somehow involved in the manufacturing or promotion of SSDs), they are talking about SLC SSDs, whose write-cycle endurance is considered to be about 100,000 cycles per cell. These are the SSDs used in large corporate servers too (and only these), but they are very expensive. On the consumer market they sell you only MLC SSDs, because those are much cheaper, and their endurance is considered to be between 1,000 and 10,000 write cycles per cell. They use address remapping to protect the cells by spreading usage between areas, and other techniques to prolong the life of the drive, so by no means will it die after 1,000 writes. But the data they advertise is for SLC SSDs, while they sell you MLC SSDs.


ram1009, are you in a position to hear anything, or just what you like to hear?
July 27, 2011 10:09:44 PM

pepe2907 said:
OK, there is a HUGE mismatch here. When they tell you how reliable SSDs are ("they" being anybody, but always somebody somehow involved in the manufacturing or promotion of SSDs), they are talking about SLC SSDs, whose write-cycle endurance is considered to be about 100,000 cycles per cell. These are the SSDs used in large corporate servers too (and only these), but they are very expensive. On the consumer market they sell you only MLC SSDs, because those are much cheaper, and their endurance is considered to be between 1,000 and 10,000 write cycles per cell. They use address remapping to protect the cells by spreading usage between areas, and other techniques to prolong the life of the drive, so by no means will it die after 1,000 writes. But the data they advertise is for SLC SSDs, while they sell you MLC SSDs.


ram1009, are you in a position to hear anything, or just what you like to hear?


You are correct: SLC drives do have higher endurance and last longer, as well as generally being faster, than MLC drives. However, SLC drives are sold on the consumer market too; they are just more expensive. MLC drives may not last as long as SLC drives, but they still generally last longer and are far more reliable than HDDs.

Every retailer I've seen clearly states which of the drives they sell are SLC and which are MLC, and clearly provides specifications for both. I don't know of any retailers that do what you say they do; please find me a single one.
July 27, 2011 10:18:42 PM

I agree with blackhawk1928. For example, when I visit newegg.com I can choose whether I want to view SLC or MLC SSDs. No problem.
July 27, 2011 10:45:55 PM

I have several HDDs at 10+ years old and they still work if I connect them. I haven't used them in a long time, but some are really old, more than 15 years or so (the first PC I assembled fully by myself was in '92, I think, and I have been a computer enthusiast for much longer), and in their time they did their duty. They are small in capacity and use old interfaces, but they're still kicking (and with no errors), and until I see an SSD surviving that long (actually it's theoretically impossible: the cells decay over time, which may just spare you the waiting), I'll not believe an SSD is more reliable.
On the other hand, new high-density HDDs really are getting flakier.
But most of the damage I have seen over time was inflicted by software errors, even viruses and the crashes that follow, or just attempts to "repair" :), power supply problems, etc., and really rarely by a native HDD problem. Maybe I should consider myself lucky. But my word is: SSDs are not immune to these things either.
July 28, 2011 12:40:18 AM

pepe2907 said:
I have several HDDs at 10+ years old and they still work if I connect them. I haven't used them in a long time, but some are really old, more than 15 years or so (the first PC I assembled fully by myself was in '92, I think, and I have been a computer enthusiast for much longer), and in their time they did their duty. They are small in capacity and use old interfaces, but they're still kicking (and with no errors), and until I see an SSD surviving that long (actually it's theoretically impossible: the cells decay over time, which may just spare you the waiting), I'll not believe an SSD is more reliable.
On the other hand, new high-density HDDs really are getting flakier.
But most of the damage I have seen over time was inflicted by software errors, even viruses and the crashes that follow, or just attempts to "repair" :), power supply problems, etc., and really rarely by a native HDD problem. Maybe I should consider myself lucky. But my word is: SSDs are not immune to these things either.


You are lucky; my longest-lasting HDD went 3 years, but you must consider the fact that it was on 24/7. If I turned my computer on once a month and then turned it off, it would probably last 20 years. Also, just because your HDDs lasted a long time doesn't mean they last longer than SSDs. SSDs haven't been around long enough (at least the new ones with features such as TRIM) to determine their lifespan, but all the evidence suggests they will last longer than HDDs, even the MLC ones. Yes, the cells do decay over time, but metal also decays; whether the SSD takes 100 years to decay and the HDD a million years, does it matter? No. SSDs are immune to far more things than HDDs and are the technology of the future.

- Much more durable: while running my OS, I can play catch with my SSD with no problems. Can't say the same for a hard drive.
- I've been running an SSD since November of '09 and it runs 24/7/365 except when I'm on vacation, and I've never had an issue. It's got 1.75TB of writes and the media wearout indicator remains at 100.

I think you are just lucky. HDDs are mechanically and magnetically based; the mechanical part breaks easily and severely suffers from wear and tear. And God forbid you get a magnet in the vicinity of an HDD: your data will be scrambled, since it sits on a magnetic platter instead of in microchips.

I actually have no clue if this is the proper way to calculate it, but here goes. I have an 80GB drive; assume it only gets 100 writes per cell, even though it's really around 10,000, I think.
I take the capacity (80GB) multiplied by the writes (100), which equals 8,000GB. So in this case I can write 8TB of data to my 80GB drive before it theoretically fails. I already have 1.75TB of writes; let's round that to 2TB. I've had the drive for around 20 months, so it took 20 months to do 2TB of writes. Then 8TB x 20 months / 2TB = 80 months, and 80 / 12 = 6.7. So if my math is correct, my drive will last almost 7 years even assuming just 100 writes per cell; take that, 1,000 writes, or even the 10,000 writes it should be. I don't know if this is the correct way of determining lifespan or write endurance, however, so don't attack me if I'm wrong :).
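For what it's worth, the arithmetic above can be sketched in a few lines of Python. This is a rough model only: real drives add wear leveling, over-provisioning, and write amplification, so treat the result as an order-of-magnitude estimate, and the function name and figures are just for illustration.

```python
# Rough SSD lifetime estimate from write endurance, assuming a constant
# write rate and perfectly even wear across all cells.

def ssd_lifetime_months(capacity_gb, cycles_per_cell,
                        tb_written, months_elapsed):
    """Months until the drive's total write budget is exhausted."""
    budget_tb = capacity_gb * cycles_per_cell / 1000  # total TB writable
    # Scale the observed usage: (budget / written so far) * time elapsed.
    return budget_tb / tb_written * months_elapsed

# Figures from the post above: 80 GB drive, a pessimistic 100 cycles
# per cell, ~2 TB written in 20 months.
print(ssd_lifetime_months(80, 100, 2.0, 20))  # -> 80.0 months, ~6.7 years
```

At the more realistic 10,000 cycles per cell the same formula gives 8,000 months, which is why write endurance alone rarely decides a consumer drive's fate.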
July 28, 2011 2:51:07 AM

blackhawk1928, I don't switch my computer off for days, even weeks (because I use a swapless configuration, it can't go into hibernation), and I currently have 6 HDDs attached to the system, 3 of them external (sometimes I switch some of them off), ranging from ~250GB (the one usually sitting in my docking station, which I use to move data around and sometimes even edit video on directly, but rarely) to ~1,500GB (Hitachi, Maxtor, Samsung, Seagate and WD as brands), and the oldest is definitely older than 5 years. And they definitely last longer than SSDs, because there is something in the technology of flash memory, some electrical charge or so (I am no specialist here), that makes the memory decay just by sitting. So I may not be absolutely exact, but an SSD can't last longer than 10 years; it's in its nature.
I also happened to work in computer laboratories, including ones dedicated to student use, with tens of disks around and many computers on a very slow upgrade cycle. And I never had a mechanically broken HDD :). It may possibly have happened in the labs; I can't be sure, of course.
The write cycles for MLC SSDs are considered to be between 1,000 and 10,000 (no way is it 100), and I think 1,000 may be considered a bad exception (it's 100,000 for SLC). But there is also special software that moves the most frequently used logical addresses around the physical medium, giving different addresses to different cells to prevent some cells from being accessed too frequently. I know this works on the free space of the drive, meaning a bigger drive with more free space will last longer, but some safety mechanism surely works on the whole drive (so they appear to be smart little things :) ). So it's not 5,000 cycles on one specific cell (which could burn it out in just days if used for a swap file, maybe even sooner), but 5,000 cycles multiplied by the size of the free space on the drive (or maybe even the entire drive; this is firmware-dependent) and divided by the size of the chunk of data you read and write at once, because a byte is never written alone but in "bursts", and accessing just one byte may rewrite the whole chunk (at least that's how it works on an HDD; how it works on an SSD is still pretty much a trade secret, or I don't know exactly :) )... wow, I lost it :)
But the thing is, it's complicated to calculate, and it hugely depends on these safety mechanisms, which are trade secrets.
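The wear-leveling arithmetic described above can be sketched roughly as follows. All numbers are illustrative, and as noted, real controllers keep their exact algorithms secret, so this is an idealized upper bound, not how any particular drive behaves.

```python
# With ideal wear leveling, the endurance budget is roughly
# (cycles per cell) x (levelled area), no matter which single
# logical address gets hammered.

def total_writes_tb(cycles_per_cell, levelled_area_gb):
    """Upper bound on total data written (TB) under ideal wear leveling."""
    return cycles_per_cell * levelled_area_gb / 1000

# 5,000 rewrites of a single fixed spot would exhaust it quickly if it
# held a swap file; levelled over 60 GB of free space, the same per-cell
# budget stretches to hundreds of terabytes of writes.
print(total_writes_tb(5000, 60))  # -> 300.0 TB
```

This is why a bigger drive with more free space tends to last longer, as the post says: the levelled area grows, and the budget grows with it.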
July 28, 2011 3:29:23 AM

^SSDs haven't been around long enough for such tests, and how long it takes them to decay is controversial.

Also, you wouldn't happen to be using Fibre Channel hard drives, would you? Those are business-grade HDDs built to a different design standard.

I've never seen an HDD last longer than around 3-4 years, whether SATA or IDE.
July 28, 2011 3:37:05 AM

There are really two different aspects to "reliability".

One is the likelihood that the SSD will fail in some unpredicted way. Because SSDs have no moving parts there's a huge potential for them to be far more reliable than mechanical hard drives. However real-life return rates for SSDs (as shown in the link in Geofelt's post above) seem to be similar to those for hard drives, so in practice that doesn't seem to be bearing fruit. Nonetheless, return rates for Intel SSDs (remembering that these rates are for previous generations of drives) are about 4-5X lower, suggesting that increased reliability really is possible.

A huge issue for this type of reliability is the firmware in the drive controllers, which seem to be involved in most of the reported problems. Unfortunately a lot of SSD vendors are anxious to get products out the door and it seems like firmware qualification and testing is being neglected. For this reason I'd recommend choosing older models which have a proven track record rather than going for the "latest and greatest". And avoid being "first on the block" when firmware upgrades are issued - wait and see if they cause problems for anyone else before taking the plunge. Remember - the pioneers are typically the ones who end up with the arrows in their backs.

The other aspect of reliability is write endurance - all SSDs will eventually stop accepting new data if they're written to often enough. This really isn't a reliability issue, it's a product lifetime issue. If you can find an SSD whose manufacturer makes a clear statement of write endurance (for example, Intel claimed its G2 drives would last "at least" 5 years at a write rate of 20GB/day) then you can predict the life expectancy of your drive, particularly when the drives report write usage via their SMART data. Unfortunately, write endurance is one of those specs that seems to be a little hard to come by. I'm sure this is because write endurance is going down with newer generations of drives as the density of flash memory continues to climb, and nobody likes to publish specs for products that are worse than last year's models.
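To make that endurance arithmetic concrete, here is a minimal sketch. The 20GB/day and 5-year figures come from the Intel claim quoted above; the 10GB/day rate is a hypothetical personal write rate, not anything from a spec sheet.

```python
# Turn a manufacturer endurance claim (rate x lifetime) into a total
# write budget, then see how long that budget lasts at your own rate.

def implied_endurance_tb(gb_per_day, years):
    """Total terabytes implied by a 'rate for N years' claim."""
    return gb_per_day * 365 * years / 1000

def years_at_rate(endurance_tb, my_gb_per_day):
    """How long the same budget lasts at a different daily write rate."""
    return endurance_tb * 1000 / my_gb_per_day / 365

budget = implied_endurance_tb(20, 5)       # Intel G2 claim -> 36.5 TB
print(budget, years_at_rate(budget, 10))   # at 10 GB/day: 10.0 years
```

Since the claim is a minimum, these are floor figures; drives that report host writes in their SMART data let you plug in your real rate instead of a guess.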
July 28, 2011 3:51:22 AM

blackhawk1928, the external interface of the drive has no relevance here; whether it's SCSI, IDE, SATA, SAS, or whatever else they develop, it's just the interface external to the disk and has nothing to do with the mechanics, the magnetic part, or reliability. I have had plenty of work with server hardware and even military hardware; it's actually nothing like a miracle, many people do this.
Although at home I don't have special hardware, I have disks over 10 years old. You may not, but I do :).
Intel, by my observation, keeps part of the capacity of its drives hidden in "reserve", for emergency cases :)
It's morning here and I've passed another night in front of the screen chatting too much, so bye from me for the moment :)
July 28, 2011 3:59:45 AM

pepe2907 said:
blackhawk1928, the external interface of the drive has no relevance here; whether it's SCSI, IDE, SATA, SAS, or whatever else they develop, it's just the interface external to the disk and has nothing to do with the mechanics, the magnetic part, or reliability. I have had plenty of work with server hardware and even military hardware; it's actually nothing like a miracle, many people do this.
Although at home I don't have special hardware, I have disks over 10 years old. You may not, but I do :).
Intel, by my observation, keeps part of the capacity of its drives hidden in "reserve", for emergency cases :)
It's morning here and I've passed another night in front of the screen chatting too much, so bye from me for the moment :)


I'm perfectly aware of that, but Fibre Channel drives are tested far more thoroughly. It's like ECC RAM, I think: made just like regular RAM, but the exceptional pieces get selected for ECC. I think that's how it works; I could be wrong.
July 28, 2011 4:37:28 AM

pepe2907 said:
blackhawk1928, the external interface of the drive has no relevance here; whether it's SCSI, IDE, SATA, SAS, or whatever else they develop, it's just the interface external to the disk and has nothing to do with the mechanics, the magnetic part, or reliability. I have had plenty of work with server hardware and even military hardware; it's actually nothing like a miracle, many people do this.
Blackhawk is right in that the hard drive manufacturers usually have "enterprise"-class drives which have, for example, more extensive ECC codes, different firmware, and a more robust mechanical design. While SATA drives often come in both "consumer" and "enterprise" versions, Fiber Channel drives are always "enterprise"-class drives.

So while the interface isn't the reason a drive might be more robust, it can be an indication of a more robust drive.
July 28, 2011 12:45:40 PM

sminlal, do you really think I don't know there are enterprise-class HDDs? Check what a SAS-interface drive is and where they are used: it's a serial interface used exclusively for server-class HDDs running at up to 15,000 rpm. Doesn't that sound similar to Fibre Channel (and by the way, it's Fibre, not Fiber; it's not an optical network)?
Actually, it only supports my statement that HDDs are more reliable :)
Good luck :)
July 29, 2011 1:49:23 PM

Great article about reliability. Manufacturers are not revealing everything they know about SSDs, and some things are nothing more than advertising hype. Looks like it is business as usual.
a c 415 G Storage
July 29, 2011 4:49:52 PM

WyomingKnott said:
And here's the article on the subject: http://www.tomshardware.com/reviews/ssd-reliability-fai...
Very interesting, thanks for the link! The main points I got from the article are:

- SSDs as a general class don't seem to be particularly more or less reliable than hard drives as a general class

- Brand matters, but it's almost impossible to find out reliability info by brand

- The current generation of SSDs is beset by firmware issues. To me this translates to: don't buy leading-edge stuff, wait for a product to prove itself first.
July 29, 2011 7:58:59 PM

sminlal - Right On! :) 
July 29, 2011 9:31:12 PM

sminlal said:
The current generation of SSDs is beset by firmware issues. To me this translates to: don't buy leading-edge stuff, wait for a product to prove itself first.


While I know what you mean by "product" there... I'm sure glad there are plenty of speed freaks and early adopters out there who are willing to dive in head first. Their "head injuries" help push these products to new heights, and I salute their testicular fortitude in taking those chances.
July 30, 2011 1:50:10 AM

groberts101 said:
While I know what you mean by "product" there... I'm sure glad there are plenty of speed freaks and early adopters out there who are willing to dive in head first. Their "head injuries" help push these products to new heights, and I salute their testicular fortitude in taking those chances.


Agree 1000000000000000%