I'm a senior software developer who, unlike most, has no real interest in hardware beyond how it affects getting my job done. Every year or so, I look for the "sweet spot" in all the high-performance gear and build a new machine. I then mostly abandon the hardware market until I need to build another, or maybe one for the family. As such, I'm having a hard time deciding if the time is right for a major change to SSDs rather than the Raptors I've been using (noisy, hot things...).
So, I'm looking at a new build. Given the current state of SSDs (or perhaps waiting a month or two tops), what should I do?
I figure maybe a 60GB SSD for the OS is a no-brainer. But given what I've read, how should I configure for the best long-term dependability? It would be nice to have it all there, but "common wisdom" (yeah, right) seems to be that I should put volatile files on a fast HD. That kinda defeats the purpose, other than initial boot and a few other cases. Others say this is resolved. Could someone help with a summary of the current wisdom, short of my wading through many hours of reviews?
But more important than that, what about my "working data drive" - specifically my source/build drive? One of my main clients at the moment has some HUGE source trees with builds that take quite some time (several minutes). Make a change in a core module, and it often gets frustrating waiting for it to build, often only to load it in a debugger, profiler, or test suite with still more I/O delays. If I can't put volatile files on an SSD, then this surely seems like a bad application for the technology.
So, what I've read seems to say that current (fixed?) issues indicate I should just stick with another set of Raptors? Maybe in a RAID 0 just for the dev stuff, since it's mostly backed by source control, which mitigates the reliability issues. But another "friend," who is a self-proclaimed Hardware Fanatic, says those issues are resolved and that I should definitely get a ~60-120GB drive for my working set (and maybe a virtual machine or two).
What say you, hardware gurus? Should I bite the bullet and go for a pair of OCZ SSDs for my OS and working drives, with a couple of TB of nice 7200 RPM mechanical drives for longer-term storage? I don't mind the cost at all if it will speed up my work, and therefore improve my perceived value (and response time) for my clients. I will continue to research on my own as time permits; I just thought this was a faster route to an answer for my very narrow focus.
I think the jury is still out on reliability and probably will be until mainstream users can afford them.
From what I was able to figure out from reading about them, it would seem that a 120GB drive should last longer than a 60GB one, due to the way information is distributed (more cells for the controller to spread the wear across).
There's a piece on Tom's about them: http://www.tomshardware.com/reviews/ssd-hdd-sata,2115-2...
While the jury is still out to lunch, I would say - go for it.
(1) Use a Patriot Torqx 128 gig (since 08/09) in single drive laptop - No problems, came with a LIFETIME warranty.
(2) Installed an Intel G2 80 gig in a desktop around Sept 09. No problems.
(3) Have an 80 gig Intel G1 in a single-drive laptop. I use this laptop to interface with flight-critical hardware, so it gets a lot of reads/writes/deletions. Also have a WD Blue 128 gig HDD that I swap in/out with the 80 gig Intel drive. Same operating system, just different programs.
(4) Installed an 8 gig PATA SSD (to replace a 20-year-old 2 gig SCSI drive) in a flight-critical computer. Used to download data from a satellite instrument currently under test.
(5) Just ordered a 120 gig G.Skill SSD. I would NOT invest this much if I did not feel that their reliability was equal to or better than the current crop of mechanical HDDs.
Recommend going for it. You might wait until "Black Friday" for good sales. Also, more new models that support SATA 6Gb/s should start to appear, which should drive down the price of the higher-end SATA 3Gb/s models. For day-to-day use, you may not see a big difference between SATA 3Gb/s and SATA 6Gb/s models (the 6Gb/s models will improve as they mature) - NOT like the difference of going from an HDD to an SSD.
In your case, I'd probably opt for 2 SSDs. One for the operating system + programs; size should equal your current "C" drive + a minimum of 30%. Then a 60 gig SSD as a scratch disk for your work files/data. As ALWAYS (true for a standard HDD too): BACK UP to a separate drive. I back up to a separate internal HDD AND a USB/eSATA drive, on the OFF chance that a PSU problem could take out your MB/SSDs/HDDs.
Quoting: "(1) Use a Patriot Torqx 128 gig (since 08/09) in single drive laptop - No problems, came with a LIFETIME warranty."
I can pretty much guarantee you that this warranty will not include NAND wear because that is not component failure. Sure, if you blow a chip somehow they'll replace it, but if you just wear the drive out then they'll smile and hand you a product catalogue so you can purchase the latest whiz-bang device on the market.
@BadDog2: Your purchase depends on your needs though. A couple of Raptors in the right RAID configuration will, assuming the rest of the system can handle the I/O load, provide very high throughput for sequential reads and writes. Raptors will not help you with many small random reads and writes, or when many read/write requests are issued concurrently. HDDs are great for doing one thing really fast, while SSDs demolish HDDs in doing multiple things really fast because they do not have the limitation of a drive head that must constantly relocate itself around the platters.
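The sequential-vs.-random gap described above is easy to see for yourself. Here's a rough Python sketch (file name and sizes are my own arbitrary choices, and the OS page cache can blur the result, so treat it as an illustration only) that reads the same 4K blocks of a scratch file first in order and then shuffled; on an HDD the random pass is dramatically slower, while on an SSD the two passes are much closer.

```python
import os
import random
import time

PATH = "testfile.bin"   # hypothetical scratch file, deleted at the end
BLOCK = 4096            # 4K blocks, the size most SSD reviews quote
BLOCKS = 10_000         # ~40 MB total

# Create the test file from random data so it can't be trivially compressed.
with open(PATH, "wb") as f:
    f.write(os.urandom(BLOCK * BLOCKS))

def read_blocks(offsets):
    """Read one 4K block at each offset, unbuffered; return elapsed seconds."""
    start = time.perf_counter()
    with open(PATH, "rb", buffering=0) as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - start

seq = [i * BLOCK for i in range(BLOCKS)]
rnd = seq[:]
random.shuffle(rnd)     # the very same blocks, visited in random order

print(f"sequential: {read_blocks(seq):.3f}s")
print(f"random:     {read_blocks(rnd):.3f}s")
os.remove(PATH)
```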
You can be paranoid about every write that occurs to your SSD or you can use it for what you bought it for. Nobody seems to worry about writes to their HDDs and yet HDDs go out with a bang (not literally) while SSDs simply creep into a read-only state. Will heavy use wear out an SSD faster than light use? Sure it will. But it's going to hammer a HDD as well. The difference is that the HDD is cheaper to replace, but also more difficult to retrieve data from after it kicks the bucket.
Thanks guys. I hate to ask questions like this, but it's hard (it seems to me) for anyone who doesn't keep up to figure out how much relevance articles (many over a year old) have with modern hardware.
So it doesn't seem to have improved to the point my "friend" claims. Interactive software development takes a handful of largely static files (at least in later evolution) and repeatedly produces many MANY new intermediate, output, and debug (symbol etc) binaries. And it's the production of those binaries as well as (re)loading them for link/debug/profile/etc that soaks up time on a hard drive, and what I was hoping to improve.
RAID 0 (or similar) on Raptors is still not cheap, is noisy and hot, and also carries a relatively high failure rate, leaving you SOL for data. Failure and data loss is no huge deal due to source control and nightly automatic backups, as well as interim "as appropriate" backups (I often snapshot before making major changes, for an easy way back when the last "step" isn't in the production tree - the reason for my several 7.2k TB backup drives).
So, if I understand the current state of things, this is basically the optimal path:
1) OS and app static files (installation binaries, etc.) on an OS SSD. Probably 60GB or so.
1.b) Maybe make it 120GB and put my main VMs there with differencing drives, so changes go on a secondary HDD and the static base drive image stays on the SSD?
1.c) Move all volatile files such as temp/scratch/page-file etc. off the SSD and onto an HDD.
2) Working (aka "scratch") drive for my currently active source tree(s) only. This is often relatively consistent (working from the same basic tree or trees for a week or more at a time), so it's not too much hassle moving trees on/off the working drive. Treat it as a consumable, so maybe a less expensive (not cheap) 60GB here. Probably not actually consuming more than 20% of the drive, so the controller can float the allocations around for better wear averaging. Combine with a few helpful scripts to quickly swap out working trees as focus comes and goes (daily, weekly, whatever).
3) Then keep using my several TB drives for backing store.
Sound about right? If it doesn't prove reliable enough over time, I can always go back to the Raptors...
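For what it's worth, the tree-swapping scripts mentioned in step 2 could be as simple as this Python sketch. The directory names (`ssd_work`, `hdd_archive`, `client_a`) are placeholders I made up, not anything from the thread; substitute your real drive paths.

```python
import shutil
from pathlib import Path

SSD = Path("ssd_work")      # stand-in for the scratch-SSD root
HDD = Path("hdd_archive")   # stand-in for the bulk-HDD root

def park(tree: str) -> None:
    """Move a working tree off the SSD onto the archive drive."""
    shutil.move(str(SSD / tree), str(HDD / tree))

def activate(tree: str) -> None:
    """Bring an archived tree back onto the SSD."""
    shutil.move(str(HDD / tree), str(SSD / tree))

# Demo with a throwaway tree name.
SSD.mkdir(exist_ok=True)
HDD.mkdir(exist_ok=True)
(SSD / "client_a").mkdir(exist_ok=True)
park("client_a")        # focus shifts away for a while
activate("client_a")    # focus comes back
print((SSD / "client_a").exists())  # → True
```

In practice you'd also want the script to refuse to park a tree with uncommitted changes, but that check depends on your source control tool.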
If you want to make the most of the drives, then putting the volatile intermediate files on the SSDs will be your best option. This will obviously wear the drives out faster, but as I said, all drives wear out faster the more you use them; it's only logical. Future drives may wear out slower, but they will still wear out, because nothing lasts forever. Putting the OS and static data on the drives will make the system responsive, but probably isn't going to benefit your build time much.
What is a bad idea is a mix of a lot of static data and some volatile data. If the drive is filled up with static data, then the controller doesn't get much of a chance to evenly wear the NAND, and any new data will be constantly written to the same places. To get the most even, but fastest overall, wear you'd want the drive to contain only volatile files that are deleted regularly. To get the fastest wear of certain NAND areas, you'd keep most of the drive filled with data that never gets deleted, with a smaller amount left over for volatile files. To get the slowest wear you'd write static data and leave it, but you don't get much benefit from that.
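A toy simulation makes the static-vs.-volatile point concrete. Assuming, purely for illustration, a 1000-block drive and a controller that levels wear uniformly over whatever blocks are recyclable, hammering the same 100 free blocks wears the hottest block roughly ten times faster than spreading the same writes over the whole drive:

```python
import random

def worst_block_wear(total_blocks, free_blocks, total_writes=100_000):
    """Distribute writes uniformly over the recyclable blocks and
    return the write count of the hardest-hit block."""
    writes = [0] * total_blocks
    pool = range(free_blocks)
    for _ in range(total_writes):
        writes[random.choice(pool)] += 1
    return max(writes)

# 90% of a 1000-block drive is static: 100k writes land on 100 blocks.
mostly_static = worst_block_wear(1000, 100)

# Whole drive volatile: the same 100k writes spread over all 1000 blocks.
all_volatile = worst_block_wear(1000, 1000)

print(mostly_static, all_volatile)  # hottest block wears ~10x faster
```

Real controllers are far smarter than a uniform random pick, but the ratio between the two cases is the point, not the absolute numbers.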
Everything is a tradeoff. But if you really want to see a benefit from the drive, you're going to want to use it for what it's good at: fast concurrent access. In most cases an SSD has already served up the data to the host and is getting the next lot before a HDD has even found it.
I've been using a 160GB Intel drive for the better part of a year now. I do light development of C++ and .Net code using Visual Studio 2008. I've found that VS2008 starts up a lot faster and that a few interactive operations are WAY faster with it installed on the SSD. The two standouts are bringing up the "Properties" pages for a project and doing lookups in the online version of MSDN.
I don't have huge projects and they build in a second or less whether they're on a hard drive or SSD, so I can't really give a definitive comment on build speed. After rebooting to clear the source files out of the file system caches the SSD build does seem significantly faster (maybe 1/2 second compared to almost a second), but with such a brief time I'm not sure how strong a conclusion I can draw.
I wouldn't worry all that much about "volatile files" being on the SSD. I deliberately configured my personal documents folder on the SSD so that all of the temporary files used by browsers, etc. would be there. My system is normally running for 8 hours a day or more. Monitoring the SSD's SMART parameters, I've found that my system typically writes about 5GB/day to the SSD. Given that Intel claims the SSD should last for "at least" 5 years at a write rate of 20GB/day, this means I should expect a 20-year service life from the drive before write wear becomes a concern.
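The arithmetic behind that 20-year figure is just the rated total write budget divided by the measured daily write rate; a quick sketch using the numbers from the paragraph above:

```python
# All figures come from the post: Intel's "at least" 5-year rating
# assumes 20GB/day of writes; SMART monitoring showed ~5GB/day.
RATED_YEARS = 5
RATED_GB_PER_DAY = 20
MEASURED_GB_PER_DAY = 5

total_write_budget_gb = RATED_YEARS * 365 * RATED_GB_PER_DAY
expected_years = total_write_budget_gb / (MEASURED_GB_PER_DAY * 365)
print(expected_years)  # → 20.0
```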
On further thought (and reading responses), a minor revision would be to move all volatile files (OS, app, browser, and build) to the "scratch SSD" along with the sources. As already stated, just consider it a "consumable," and never allocate more than about 50% so that the wear-balancing algorithms can work.
So, one more question, what's the best value (yet still reliable/fast - including size?) drive right now? Or maybe more importantly, what are the "stay away" models?
Going to 50% unused is probably overkill; 20% should be plenty, depending on how well you can estimate the space used. The 50% would allow for future growth. Most of the "higher end" drives already provide for over-provisioning. In fact, some have cut back the amount of space reserved for over-provisioning from 16 gigs to 8 gigs (ballpark numbers) with little impact on performance over time.
For best performance, if you have SATA 6Gb/s (or plan on going to it), then the Crucial RealSSD C300. Currently it is the only SATA 6Gb/s model - HOWEVER, others should follow shortly.
How much of a user-visible performance difference there is between the listed SSDs, as opposed to what the benchmarks show, will vary.
You would probably be happy with any of the above models - look for sales. I just picked up the G.Skill Phoenix Pro for $250 (less than the non-Pro model). My 4K random reads = 27.4 and writes = 62.9 (very close to the benchmark in the link).
Stay away from:
Western Digital Blue, Kingston value products, and Dane-Elec (they advertised an Intel 80 gig - they forgot to mention it is the OLD G1). Make sure you check reviews. The biggest difference between a "stay away from" drive and a good one is in the 4K random numbers; the lousy ones are ONLY 2x better than a mechanical HDD.
Further to the conversation I want to add something VERY important: You will save HOURS, DAYS, WEEKS, of your time by using SSDs. Is this worth $100? $500? Use the SSD. If you live long enough to wear it out that means you have SAVED a bunch of time for that life.
As RetiredChief says, 50% is overkill. If you only use 60GB of a 120GB SSD you are under 50%, because the drive has already reserved a certain amount itself. With TRIM enabled, I would say 20% is much more than you require to keep the drive working in near-peak condition.
Between SSDs and a pack of RAM there can be no better friend to a developer. Hmmmm cpu? No, I'll keep my RAM and SSD there too.
Best value? That's a really difficult question. For me, smaller is better, as I use it for my OS and do nothing exciting from my computer. However, I have to +1 the chief's choices here again, as everything I read points to SandForce and especially the Crucial C300. However, Intel's M or E lines are still IOPS killers. Tough call...