Thinking about getting some Western Digital 500GB Black hard drives and putting them in RAID 0.
I have looked around the web but seem to get conflicting information; some say it helps, others say it only helps on hard drive speed tests.
I am looking for faster boot times and faster file transfer speeds. I do play some games, but I read that it doesn't really help with load times.
So what kind of performance boost should I be expecting?
What is the best number of hard drives to buy? I am thinking about 3 drives.
I love the speed of a RAID 0 array. The thing is, SSDs offer better performance for about the same price as two WD Black drives. Of course, an SSD is not going to offer as much space as the two WDs. If you have the money, an SSD is the way to go.
You have to keep in mind that there are two different kinds of performance when it comes to disk drives:
Access time - the time it takes to find the data on the disk
Transfer rate - the speed at which the data can be read or written once it's been found.
Under the right conditions RAID 0 can improve the transfer rate, but it can't improve the access times. That means RAID 0 can help when copying or reading large files, but it won't be all that effective for tasks that need to access a lot of small files (such as booting the system and starting programs).
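To put rough numbers on that distinction, here's a toy model. The seek time and transfer rates below are illustrative assumptions, not measurements of any particular drive:

```python
# Rough model of why RAID 0 helps large transfers but not small-file work.
# All numbers are illustrative assumptions, not measurements.

SEEK_MS = 12.0          # typical 7200 RPM access time (ms), unchanged by RAID 0
SINGLE_MBPS = 100.0     # sequential rate of one drive (MB/s), assumed
RAID0_MBPS = 190.0      # two-drive stripe: ~2x minus overhead, assumed

def read_time_ms(num_files, file_kb, mbps):
    """One seek per file plus the transfer itself."""
    seek = num_files * SEEK_MS
    transfer = (num_files * file_kb / 1024) / mbps * 1000
    return seek + transfer

# One large 400 MB file: transfer-bound, RAID 0 nearly halves the time.
print(read_time_ms(1, 400 * 1024, SINGLE_MBPS))  # ~4012 ms
print(read_time_ms(1, 400 * 1024, RAID0_MBPS))   # ~2117 ms

# 1000 small 4 KB files (boot, app launch): seek-bound, RAID 0 barely helps.
print(read_time_ms(1000, 4, SINGLE_MBPS))        # ~12039 ms
print(read_time_ms(1000, 4, RAID0_MBPS))         # ~12021 ms
```

The big sequential read is dominated by transfer time, so doubling stripe bandwidth nearly halves it; the thousand small reads are dominated by the thousand seeks, which RAID 0 does nothing about.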
RAID 0 (block-level striping without parity or mirroring) has no redundancy. It provides improved performance and additional storage but no fault tolerance, which is why simple stripe sets are normally referred to as RAID 0. Any disk failure destroys the array, and the likelihood of failure increases with more disks in the array (at a minimum, catastrophic data loss is twice as likely compared to a single drive without RAID). A single disk failure destroys the entire array because when data is written to a RAID 0 volume, the data is broken into fragments called blocks. The size of the blocks is dictated by the stripe size, which is a configuration parameter of the array. The blocks are written to their respective disks simultaneously, which allows smaller sections of the entire chunk of data to be read off the drives in parallel, increasing bandwidth. RAID 0 does not implement error checking, so any error is uncorrectable. More disks in the array means higher bandwidth, but greater risk of data loss.
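The striping described above can be sketched in a few lines. This is a toy model of the block layout, not how any real controller is implemented:

```python
# Toy sketch of RAID 0 striping: data is split into stripe-sized blocks
# and dealt round-robin across the member disks. Illustrative only.

def stripe(data: bytes, num_disks: int, stripe_size: int):
    """Return a list of per-disk byte buffers for a RAID 0 write."""
    disks = [bytearray() for _ in range(num_disks)]
    for i in range(0, len(data), stripe_size):
        block = data[i:i + stripe_size]
        disks[(i // stripe_size) % num_disks].extend(block)
    return [bytes(d) for d in disks]

# 8 bytes, 2 disks, 2-byte stripes: blocks alternate between the disks.
print(stripe(b"ABCDEFGH", 2, 2))  # [b'ABEF', b'CDGH']
```

Each disk ends up holding only every other block of the file, which is exactly why losing a single drive destroys the whole array: half of every large file is gone.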
Concur with what shadow and Sminlal said.
A few caveats:
(1) As Sminlal stated, RAID 0 does nothing for access time and 4K random reads, while it does provide a good boost for sequential reads/writes. NOTE: you can improve access time and random reads/writes by short stroking the RAID 0 array, but this also decreases available space. For example, with two 500GB HDDs, when you create the RAID 0 volume you select 30-40% of the available space, which would be 300-400GB out of the 1000GB. You do nothing with the remaining space. On my WD Blacks (640GB drives) this cut access time from about 12.6 ms to approx 9.6 ms. Better than standard RAID 0, but still nowhere close to SSDs!
(2) Yes, it increases the probability of a failure, and yes, if you have a failure then, as the old saying goes, USNULUZ. But it's not as draconian as it sounds. Do backups, which you should anyway. I have used RAID 0 for a considerable time with no failures. Currently still using 4 HDDs (two pairs of RAID 0, one for Vista and one for XP); they've been running since the E6400 came out. Also: one, turn off delayed writes; two, DON'T jar the HDDs while they're on.
In your case, just RAID 0 two drives and use the 3rd drive as a backup.
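For what it's worth, the short-stroking numbers in caveat (1) work out like this. The 35% fraction is an assumed midpoint of the 30-40% range, and the access times are the poster's own measurements, not derived values:

```python
# Short-stroking arithmetic for two 500 GB drives in RAID 0.
# 0.35 is an assumed midpoint of the suggested 30-40% range.

drives = 2
per_drive_gb = 500
stroke_fraction = 0.35

raw_gb = drives * per_drive_gb
usable_gb = raw_gb * stroke_fraction
print(raw_gb, usable_gb)    # 1000 350.0, i.e. the "300-400 out of 1000" range

# Measured effect reported above on 640 GB WD Blacks:
full_stroke_ms = 12.6
short_stroke_ms = 9.6
print(1 - short_stroke_ms / full_stroke_ms)  # ~0.24, roughly 24% faster seeks
```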
Thanks for the information.
I wasn't planning on storing anything I don't want to lose on the RAID 0. I have a server with four 145GB drives in RAID 5 that backs up to an external drive. I store everything important on that.
Hmm, maybe I will just go with an SSD.
What is a good one to get? I am looking in the $150-200 range.
I have heard from some co-workers that SSDs' life expectancy isn't very good and that they burn themselves out. That was half a year ago, so they might have improved since then. If it ever was a problem to begin with.
SSDs can only sustain a certain number of write operations before they won't accept new data. The manufacturers mitigate this by doing "wear leveling" wherein the controller scatters writes across all possible flash memory cells so that no one cell wears out first.
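A toy model of that wear-leveling idea. This is hypothetical controller logic, vastly simpler than real SSD firmware, but it shows the principle of redirecting writes to the least-worn block:

```python
# Toy wear leveling: instead of rewriting the same physical cell, the
# controller redirects each logical write to the least-worn free block
# and updates the logical-to-physical mapping. Hypothetical model only.

class WearLevelingSSD:
    def __init__(self, num_blocks):
        self.wear = [0] * num_blocks   # erase count per physical block
        self.mapping = {}              # logical block -> physical block

    def write(self, logical_block):
        # Pick the least-worn physical block not currently mapped.
        used = set(self.mapping.values())
        free = [b for b in range(len(self.wear)) if b not in used]
        target = min(free, key=lambda b: self.wear[b])
        self.mapping[logical_block] = target
        self.wear[target] += 1

ssd = WearLevelingSSD(num_blocks=8)
for _ in range(100):
    ssd.write(0)               # hammer one logical address 100 times

print(ssd.wear)                # erases spread almost evenly across blocks
print(max(ssd.wear) - min(ssd.wear))  # 1, instead of 100 vs 0
```

Hammering a single logical address 100 times spreads the erases almost evenly across all eight physical blocks, instead of one cell absorbing all 100 and dying first.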
My 160GB Intel X25-M G2 drive is rated for a lifespan of "at least" 5 years even if I write 20GB of data to it each and every day. Over the past 18 months I've averaged about 5GB of writes per day, which suggests that the drive should last for 20 years. I expect it to be obsolete long before then.
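That estimate is just straightforward arithmetic on the rated and observed write volumes:

```python
# Endurance arithmetic from the post above. The 20 GB/day over 5 years
# is the drive's rating as quoted; the 5 GB/day is the observed average.

rated_gb_per_day = 20
rated_years = 5
observed_gb_per_day = 5

total_rated_writes_gb = rated_gb_per_day * 365 * rated_years
print(total_rated_writes_gb)   # 36500 GB of rated writes

expected_years = total_rated_writes_gb / (observed_gb_per_day * 365)
print(expected_years)          # 20.0 years at the observed rate
```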
^+10 - He speakest NOT with a forked tongue, but the truth.
I would not have "blown" the money to buy 5 SSDs if I did not think they would last.
I recommend the 120GB (many say 64-80GB is OK). At the $200 mark I'd recommend the slightly older generation with the SATA II SF-1200 controller (such as the Patriot Phoenix Pro, Corsair, or Vertex 2; the Vertex 2 seems to have some quality problems, hopefully they have been resolved), or the C300 SATA III if it's the same price as the SATA II SF-1200 drives.
The SATA II drives are yesterday's news and the new SATA III drives are faster.
Things to consider: (1) SATA II SSDs will blow the doors off a HDD. (2) SATA III drives are even faster but cost more, so instead of 120GB you may need to drop down to 80GB.
Many recommend the C300. It was the first SATA III SSD out the door and has since been surpassed by newer SATA III drives, which is why it is cheaper than the newer versions.
Definitely worth it when you are aware of the risks (as you are).
Drives are cheap and the speed is good. I just installed an SSD myself (a C300), and my Samsung F3 RAID beats it in all benchmarks except access time. (Yeah, I'm still trying to figure that one out myself, and yes, I think there's something wrong; the SSD should kill the RAID in 4K randoms.)