I do a lot of video editing with Premiere Pro CC, and I often have other apps such as Photoshop and an office suite running while I do it. Currently I'm running all hard drives, but I want to put SSDs in the mix for higher performance. The question is: what will give me the best performance with the smallest hit to reliability?
On these forums I still see people saying that SSD reliability is read/write-cycle limited. But that's based on older, smaller laboratory studies. In recent years there have been several large-scale, real-world studies in the datacenters of Google, Microsoft, and Facebook that contradict this. An IEEE article examines these: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8013175
Here's a summary of one study for lay readers (I've bolded notable items):
http://hexus.net/tech/news/storage/90920-google-datacentre-ssd-study-offers-surprising-conclusions/
From the latter:
- SLC drives, which are targeted at the enterprise market and considered to be higher end, are not more reliable than the lower end MLC drives.
- Age, rather than amount of usage, correlates with higher error rates. So flash memory wearing out isn't really a problem with the SSD designs we have now.
- Between 20 and 63 per cent of drives experience at least one uncorrectable error during their first four years in the field.
- Between 30 and 80 per cent of drives develop at least one bad block and 2 to 7 per cent develop at least one bad chip during the first four years in the field.
- RBER (raw bit error rate), the standard metric for drive reliability, is not a good predictor of those failure modes that are the major concern in practice.
- RBER and the number of uncorrectable errors grow with PE cycles in a linear fashion.
- UBER (uncorrectable bit error rate), the standard metric to measure uncorrectable errors, is not very meaningful.
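For anyone unfamiliar with the two metrics the bullets mention: RBER and UBER are just error counts normalized by the number of bits read. A rough sketch of the idea (the exact standardized definitions vary slightly, so the numbers below are purely illustrative, not from any of the studies):

```python
def rber(raw_bit_errors: int, total_bits_read: int) -> float:
    """Raw bit error rate: bit flips seen BEFORE the drive's ECC corrects them."""
    return raw_bit_errors / total_bits_read

def uber(uncorrectable_bit_errors: int, total_bits_read: int) -> float:
    """Uncorrectable bit error rate: bit errors the ECC could NOT fix."""
    return uncorrectable_bit_errors / total_bits_read

# Illustrative numbers only: a drive that read ~10^15 bits
print(rber(3_000, 10**15))  # many raw errors, but ECC catches them
print(uber(1, 10**15))      # a single uncorrectable error in the same span
```

The studies' point is that these two numbers don't track each other well: a drive can have a high RBER yet few uncorrectable errors, which is why RBER turns out to be a poor predictor of the failures that actually matter.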
So all of this seems to suggest that the "common wisdom" is upside down. Using SSDs for things like page files and swap space may be less of a problem than people think, despite the old idea that they are read/write-cycle limited. On the other hand, using them for permanently installed files, like program files, may be a bigger risk than we thought, since errors correlate with age rather than wear, and you'd want to avoid having to reinstall key software or OS files frequently.
How are people here digesting these new studies from Google, Facebook, and Microsoft, and how should I apply them to spec'ing a system for maximum performance AND maximum reliability? Thanks in advance!