RAID 0 + SSD = super fast hard drive?

July 30, 2009 12:42:57 AM

So if I made a RAID 0 array using SSDs, would I have a super fast hard drive?

Would it be worth the money? I'm planning on backing up to my NAS, so I'm not too worried about data loss, although I hear that SSDs are pretty long lasting.
July 30, 2009 1:30:51 AM

Sure, it would be fast. SSDs are pretty darn quick by themselves, though; you may find one is all you need.

Whether it's worth the money is entirely up to your situation.
July 30, 2009 1:43:27 AM

If you're using RAID for SSDs, get a very good RAID controller (Adaptec comes to mind) on at least a PCI Express x4 slot.
July 30, 2009 6:22:09 AM

Here's an interesting question... the TRIM command is going to be important for SSDs running on Windows 7. But it's a brand-new command that still isn't implemented in some SSDs.

I assume that RAID controller firmware is also going to have to be updated to pass a TRIM command issued by the OS through to the appropriate member drives. How can we tell if a particular RAID controller will actually do this?
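For what it's worth, here is a rough sketch of how to check at least the OS side of that question on Windows 7. It only tells you whether the OS is issuing TRIM at all; it says nothing about whether a particular RAID controller forwards TRIM to its member drives.

```python
# Sketch: ask Windows whether TRIM is enabled at the OS level.
# "DisableDeleteNotify = 0" in the output means the OS issues TRIM.
# This does NOT reveal whether a RAID controller passes TRIM through.

import subprocess

def os_trim_enabled() -> bool:
    out = subprocess.run(
        ["fsutil", "behavior", "query", "DisableDeleteNotify"],
        capture_output=True, text=True, check=True,
    ).stdout
    return "= 0" in out

if __name__ == "__main__":
    print("OS is issuing TRIM" if os_trim_enabled() else "TRIM is disabled at the OS level")
```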
July 30, 2009 6:29:28 AM

sminlal said:
Here's an interesting question... the TRIM command is going to be important for SSDs running on Windows 7. But it's a brand-new command that still isn't implemented in some SSDs.

I assume that RAID controller firmware is also going to have to be updated to pass a TRIM command issued by the OS through to the appropriate member drives. How can we tell if a particular RAID controller will actually do this?
Until we see it, I don't think TRIM matters; by the time it is important, SSDs will be a staple. Besides, I'm sure we can always flash our old SSDs if we even still want to bother with them.

Or we may look back on TRIM and laugh. Lol, blast processing.
July 30, 2009 6:31:03 AM

spambi said:
So if I made a RAID 0 array using SSDs, would I have a super fast hard drive?

Would it be worth the money? I'm planning on backing up to my NAS, so I'm not too worried about data loss, although I hear that SSDs are pretty long lasting.

Nope, it sure wouldn't be worth it. Maybe next year. But if you're curious, check out the new Kingston V-series drives.
July 30, 2009 11:47:02 AM

It's better to buy one SSD with a good controller (like the Intel X25-M) than to put cheaper SSDs in RAID 0. Since SSDs already use interleaving, or "RAID 0", internally, buying a better SSD may give you more performance benefit than RAID 0 would. Hardware RAID in particular will not scale well with SSDs: an Areca controller is bound to about 70,000 IOps while a single SSD can already do 35,000 IOps, so you'll hit a bottleneck pretty soon unless you pick software RAID 0.
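For readers who haven't seen how striping works, here is a minimal sketch of the RAID 0 mapping being described: each logical address lands on exactly one member drive, which is the same interleaving idea SSD controllers apply internally across their flash channels. The 128 KiB stripe size and two-drive count are arbitrary assumptions for the example.

```python
# Minimal illustration of RAID 0 striping: each logical offset maps to
# exactly one member drive, so independent requests can be serviced by
# different members in parallel. Values below are assumed, not prescriptive.

STRIPE_SIZE = 128 * 1024   # 128 KiB stripe (arbitrary example value)
NUM_DRIVES = 2             # two SSDs in RAID 0

def locate(logical_offset: int):
    """Map a logical byte offset to (drive index, offset on that drive)."""
    stripe_index = logical_offset // STRIPE_SIZE
    within_stripe = logical_offset % STRIPE_SIZE
    drive = stripe_index % NUM_DRIVES
    drive_offset = (stripe_index // NUM_DRIVES) * STRIPE_SIZE + within_stripe
    return drive, drive_offset

if __name__ == "__main__":
    for off in (0, 100 * 1024, 300 * 1024, 1024 * 1024):
        drive, drive_offset = locate(off)
        print(f"logical offset {off:>8} -> drive {drive}, offset {drive_offset}")
```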
July 30, 2009 11:49:00 AM

The Kingston is using a JMicron JMF-602 variant, by the way, so don't buy that drive.

Quote:
Basically, the JMicron JMF602 controller has been laser etched with the Toshiba name as Kingston didn't want consumers to see the JMicron name and think this drive would stutter.


http://www.legitreviews.com/article/1005/2/
July 30, 2009 12:15:38 PM

omnomnom, crappy cheap SSDs :) 
July 30, 2009 12:40:34 PM

Eh, I'll toss in my overly long 2 cents...

Current SSDs have proven optimal for specific I/O patterns, generally small random reads. Additionally, as pointed out, apps and the OS will queue a write followed by a read request, a pattern built around the seek-time limitation of hard drives that SSDs don't have; hence the SSD queue gets filled with odd read/write mixes (not a good pairing with some types of RAID, where there are write penalties). This can generate up to a 2:1 write penalty on a write transaction. Since Windows 7 and, I believe, ZFS are only now addressing this, it's still somewhat of an edge case.
There are also new commands being added to SSDs to counter these issues, among others; again, as pointed out above.

For the consumer in the same price range, 15k SAS platters (and good 10k SATA drives) will generally outrun the current range of SSDs at a lower price per GB for any write I/O profile and for sequential sweeping reads.
SSDs aren't a bad choice for OS boot drives, web environments that generate lots of random reads (hits), some types of lighter databases (not involving heavy random write transactions), etc.

If we're talking about RAID, RAID 0 is best when the stripe is aligned with the drive sector size and the desired I/O profile is taken into account. Generally, I've read that larger stripes benefit SSDs, along with an adjusted sector size, though don't ask for details as I haven't actually experimented with this and it's nothing concrete. I can say that if you're going to pair the two up, I'd recommend hardware RAID with a decent write cache.

Personally, I would not use an SSD RAID 0 for backup. Since backup is a write-heavy but generally second-tier workload, 5400 RPM SATA in RAID 5, 5+1, or 6 would be a more practical solution (again, this is what I would do). If you want the speed of SSDs for backup I would recommend SAS, as generally you're looking at mostly sequential writes (scheduled jobs, etc.) with minimal random access. If you definitely want the gains of RAID 0, I would implement nested RAID (RAID 10, 50, or 60) so you've got high availability and performance.

Oh yeah, I'd do all this on a hardware RAID controller with a BBU.
July 30, 2009 12:57:01 PM



http://images.anandtech.com/graphs/intelx25mg2perfpreview_072209165207/19508.png

Still a lot better than the best consumer-class HDD, the VelociRaptor, so I don't understand your argument about write performance.

Modern SSDs like Intel's don't do two-phase writes; they write to free space, so they don't have to erase a block first because it's already erased. This is also the reason they require free space, or they will be very slow when writing. For example, if you filled your SSD to 100% and then removed files, without TRIM support the SSD would not know that and couldn't remap random writes to free flash blocks.

If you look at JMicron (JMF-602 controller) SSDs, they should only be used in read-many, write-few situations; for example, light desktop PCs used for web browsing would be fine, as read latency is still a key advantage over HDDs even on cheap SSDs.

The sector-size-adjustment story is about involving all disk members in a single I/O transaction. That badly hurts performance on SSDs and didn't give meaningful performance advantages on HDDs either. It's best if an I/O request can be handled by exactly one disk member, so the other members can be loaded with different I/O requests at the same time and process them in parallel. If you don't do this, you may get proper sequential speeds, since those are buffered anyway, but you won't get an increase in IOps performance, which is what matters.

An Intel SSD can already do 35,000 random read IOps. With RAID 0 you can lift that to 100,000+, so enterprise-level performance is now within reach of casual consumers. That's pretty remarkable, honestly. HDDs totally get nuked by this kind of (low-level) performance:

http://images.anandtech.com/graphs/intelx25mg2perfpreview_072209165207/19506.png
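A back-of-the-envelope sketch of that scaling argument, using only the figures quoted in this thread (35,000 random-read IOps per Intel SSD, roughly 70,000 IOps ceiling for the Areca); these are the thread's numbers, not benchmark results:

```python
# RAID 0 IOps scale with drive count only until the controller's own
# ceiling is reached. Figures below are the ones quoted in this thread.

PER_DRIVE_IOPS = 35_000
HW_CONTROLLER_CAP = 70_000        # e.g. the Areca limit mentioned above

def raid0_iops(drives: int, controller_cap: float = float("inf")) -> float:
    """Ideal random-read IOps for a RAID 0 set, capped by the controller."""
    return min(drives * PER_DRIVE_IOPS, controller_cap)

if __name__ == "__main__":
    for n in (1, 2, 3, 4):
        hw = raid0_iops(n, HW_CONTROLLER_CAP)
        sw = raid0_iops(n)   # software RAID: no controller ceiling assumed
        print(f"{n} drive(s): hardware RAID ~{hw:,.0f} IOps, software RAID ~{sw:,.0f} IOps")
```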

July 30, 2009 6:52:37 PM

enser said:
Eh, I'll toss in my overly long 2 cents...

Current SSDs have proven optimal for specific I/O patterns, generally small random reads. Additionally, as pointed out, apps and the OS will queue a write followed by a read request, a pattern built around the seek-time limitation of hard drives that SSDs don't have; hence the SSD queue gets filled with odd read/write mixes (not a good pairing with some types of RAID, where there are write penalties). This can generate up to a 2:1 write penalty on a write transaction. Since Windows 7 and, I believe, ZFS are only now addressing this, it's still somewhat of an edge case.
There are also new commands being added to SSDs to counter these issues, among others; again, as pointed out above.

For the consumer in the same price range, 15k SAS platters (and good 10k SATA drives) will generally outrun the current range of SSDs at a lower price per GB for any write I/O profile and for sequential sweeping reads.
SSDs aren't a bad choice for OS boot drives, web environments that generate lots of random reads (hits), some types of lighter databases (not involving heavy random write transactions), etc.

If we're talking about RAID, RAID 0 is best when the stripe is aligned with the drive sector size and the desired I/O profile is taken into account. Generally, I've read that larger stripes benefit SSDs, along with an adjusted sector size, though don't ask for details as I haven't actually experimented with this and it's nothing concrete. I can say that if you're going to pair the two up, I'd recommend hardware RAID with a decent write cache.

Personally, I would not use an SSD RAID 0 for backup. Since backup is a write-heavy but generally second-tier workload, 5400 RPM SATA in RAID 5, 5+1, or 6 would be a more practical solution (again, this is what I would do). If you want the speed of SSDs for backup I would recommend SAS, as generally you're looking at mostly sequential writes (scheduled jobs, etc.) with minimal random access. If you definitely want the gains of RAID 0, I would implement nested RAID (RAID 10, 50, or 60) so you've got high availability and performance.

Oh yeah, I'd do all this on a hardware RAID controller with a BBU.

In my real-world testing of the cheap Kingston V-series, I noticed a drastic improvement in response time; in-game loads are completely gone in Crysis (I originally thought that was the GPU's fault), which put new faith in my 8800 GPU.
I wouldn't use an SSD for constant reads and writes yet; I install the programs I use the most on it and keep an additional HDD for downloads, seldom-used programs, and media. But if you know what you like to use on your computer and you play games or use Photoshop, Office, etc., it's a godsend. SSD is the future; it really starts to feel like plug and play.
July 30, 2009 7:20:12 PM

rcpratt said:
omnomnom, crappy cheap SSDs :) 


Yeah, just one SSD will be enough for me.

I like the sounds hard drives make. A 47 GB UW SCSI Seagate 5.25" drive sounds like a bass guitar. My 74 GB Raptor makes a cool sound, plus it's fast.

Having used hard drives that made cool sounds for 20 years, maybe I'll be a customer when somebody comes up with a utility to add hard drive noises to an SSD.

But the main thing is, they are FAST and expensive. Probably by the time I spring for a Core i7, another year will have gone by. $200 for 200 GB on an SSD... think that's possible?
July 30, 2009 7:25:47 PM

They'll probably be $1/GB by 2011. But I think it's just weird that some of you enjoy HDD noises ;)
July 30, 2009 7:36:14 PM

When the hard drive grinds away, I at least know it's working hard and that it's not some other problem causing the computer to be sluggish. Of course, it would be nice not to have the slowdown in the first place! Plus, I don't have to look down from the monitor at the HDD access light to see what's going on.
July 30, 2009 11:25:06 PM

Well, that's why you have nice monitoring gadgets for both Windows and Linux. In Linux, you can also see the iowait percentage, which tells you very quickly whether the disk is the bottleneck (> 90% iowait) or the CPU.

Some screens:
http://ubuntu-tutorials.com/2008/06/20/at-a-glance-syst...

I too like to know what my system is doing. If I'm waiting and nothing happens, I want to see whether it's bottlenecked by the CPU, the disk, or the network, or just not working at all! So this is a pretty good substitute for those used to hard drive seek sounds as a measure of system load (because the hard disk is the ultimate bottleneck: whenever the system is not responding instantly, it's mostly the disk that is slow).
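For anyone who wants the same signal without a desktop gadget, here is a rough Linux-only sketch that derives the iowait share from /proc/stat (the field order there is user, nice, system, idle, iowait, ...):

```python
# Sample /proc/stat twice and report the share of CPU time spent waiting
# on I/O over the interval. Linux only; no third-party packages needed.

import time

def cpu_times():
    with open("/proc/stat") as f:
        return [int(x) for x in f.readline().split()[1:]]  # aggregate "cpu" line

def iowait_percent(interval: float = 1.0) -> float:
    before = cpu_times()
    time.sleep(interval)
    after = cpu_times()
    deltas = [b - a for a, b in zip(before, after)]
    total = sum(deltas)
    return 100.0 * deltas[4] / total if total else 0.0   # index 4 = iowait

if __name__ == "__main__":
    print(f"iowait over the last second: {iowait_percent():.1f}%")
```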

Richy0money: don't feel bad about your SSD. It can't beat an Intel SSD, but even cheap SSDs have the advantage of low latency, so booting and application loading go very fast. Random write is still very bad, but you might not be doing that a lot. And if the stuttering problem (caused by write latencies going sky-high) is gone, it may be a good product for light desktop systems. Even the worst SSD is still better at tasks like booting or application loading (basically random read performance) than any HDD can ever be.

It's just that with the new Intel price cuts the best product is becoming quite cheap, while the cheap products aren't getting any cheaper. OCZ only lowered its prices because it was forced to by Intel, as not lowering them would cost OCZ market share.
July 31, 2009 7:11:25 AM

sub mesa said:
http://images.anandtech.com/graphs/intelx25mg2perfpreview_072209165207/19508.png

Still a lot better than the best consumer-class HDD, the VelociRaptor, so I don't understand your argument about write performance.

Modern SSDs like Intel's don't do two-phase writes; they write to free space, so they don't have to erase a block first because it's already erased. This is also the reason they require free space, or they will be very slow when writing. For example, if you filled your SSD to 100% and then removed files, without TRIM support the SSD would not know that and couldn't remap random writes to free flash blocks.

If you look at JMicron (JMF-602 controller) SSDs, they should only be used in read-many, write-few situations; for example, light desktop PCs used for web browsing would be fine, as read latency is still a key advantage over HDDs even on cheap SSDs.

The sector-size-adjustment story is about involving all disk members in a single I/O transaction. That badly hurts performance on SSDs and didn't give meaningful performance advantages on HDDs either. It's best if an I/O request can be handled by exactly one disk member, so the other members can be loaded with different I/O requests at the same time and process them in parallel. If you don't do this, you may get proper sequential speeds, since those are buffered anyway, but you won't get an increase in IOps performance, which is what matters.

An Intel SSD can already do 35,000 random read IOps. With RAID 0 you can lift that to 100,000+, so enterprise-level performance is now within reach of casual consumers. That's pretty remarkable, honestly. HDDs totally get nuked by this kind of (low-level) performance:

http://images.anandtech.com/graphs/intelx25mg2perfpreview_072209165207/19506.png


Hm, I made two 'arguments', I suppose. Neither was a focused slam on SSDs, and my goal was to address the issue of SSDs in RAID 0 for backup purposes.
Basically, the first one was that SSDs are currently great for certain I/O profiles but show contention in others; however, this storage platform is evolving constantly, as you've shown in your reply. Thanks for the updated article showing that new tech is being released that addresses issues with some specific write profiles (as even stated where those benchmarks are posted). It looks like random writes on the new generation of SSDs are great. I would still like to see them synthetically benchmarked against 15k SAS for all profiles, as that is technically as consumer-available as SSDs and the interfaces are appearing on more consumer motherboards; more or less, that's what SSDs have traditionally been pitted against (with the Raptors second).
Though I have no doubt that SSDs are a potential all-purpose future of consumer storage, and have their uses now.

The other was about the general use of RAID for backup and general drive practicality. The sector-size-adjustment story? RAID 0 stripe width and tweaking is a constant debate, but the fact remains it depends on what I/O profile you are targeting. Generally speaking, the RAID 0 stripe is going to end up being a factor of the sector size for a great deal of stripe optimization (i.e., if you are basing your stripe width on clusters): parallel processing, contiguous I/O to disk, and low seek, which is the point of striping. I'm not attempting a blanket statement, because of course you should choose the correct stripe for your expected I/O profiles (and whether or not you can optimize within those boundaries), and results are going to depend on the controller, the disks... and you adjust from there. I wasn't attempting to state specifics, for those reasons. As for the SSD stuff, I left that open for someone else to conclude, as I'm not an up-to-the-minute expert when it comes to SSDs, but I have some experience with what is, I suppose, now the older tech.
Honestly, I think RAID 0 gets used a bit too much. I'm not saying it doesn't have its place (especially if the data is more or less volatile: highly repurposable consumer drives, some server OS drives, etc.), but you don't see double the performance gains across the board in the real world (as it's generally marketed), and you can achieve great performance with proper redundant RAID volumes if you toss in your coin for decent hardware and want to keep your data around. Again, it has its place and its I/O profiles, and if you implement for those properly, more power to you.

So the second point: are SSDs in RAID 0 good for backup? My personal opinion was most likely not, at this point in time, and not on RAID 0. Generally, backup can be treated as tier-2 storage, mainly involves sequential sweeping writes, and is a high-availability operation. I did, however, want to give some alternative options...

Not going to stop anyone from flaming away... most likely departing from the thread; hope I was of some benefit. :)

That's it for my $.02 again, and apologies for the ramble! ;)
July 31, 2009 7:15:14 AM

Richy0money said:
In my real-world testing of the cheap Kingston V-series, I noticed a drastic improvement in response time; in-game loads are completely gone in Crysis (I originally thought that was the GPU's fault), which put new faith in my 8800 GPU.
I wouldn't use an SSD for constant reads and writes yet; I install the programs I use the most on it and keep an additional HDD for downloads, seldom-used programs, and media. But if you know what you like to use on your computer and you play games or use Photoshop, Office, etc., it's a godsend. SSD is the future; it really starts to feel like plug and play.


Agreed. Essentially I was using them in the same fashion, and they worked great for me within those bounds. As their evolution continues, the next upgrade will be an SSD in the laptop. :)
July 31, 2009 2:57:34 PM

Hardware RAID controllers are only great for parity-based RAID arrays, not simple mirror/stripe operations like RAID 1/0/10/1+0/0+1, etc.
August 1, 2009 2:40:01 AM

apache_lives said:
Hardware RAID controllers are only great for parity-based RAID arrays, not simple mirror/stripe operations like RAID 1/0/10/1+0/0+1, etc.


Does scaling mean anything? You will hit the wall fast when you're diving into nested RAID on onboard controllers for this reason alone; regardless, even for RAID 0 a performance gain can be seen. If you want to dive into the territory of write-back caching on non-parity arrays (if the controller allows it) and have proper cache backup, you can see even more.
If you're doing only two drives it might not be completely necessary and you can go for the value option there, but for anything greater I'd say it is.
Regardless... even the base level of RAID management, stripe adjustment, cache tweaking, and dynamic LUN potential is worth the investment, in my opinion.
August 1, 2009 9:16:58 AM

Scaling is something that works best on your host system, as it has the most powerful hardware, while with hardware RAID you will probably hit a performance limit somewhere. Even two SSDs in RAID 0 on an Areca will hit the maximum IOps the Areca can handle (70,000), while software RAID would be able to exceed that.

So I would think software RAID could scale beyond hardware RAID, especially for SSDs. Actually, pairing an SSD with an ICHxR southbridge and enabling the "Write caching" option may be the fastest setup, because you get both a RAM write-back buffer and a very fast SSD for reading.
August 1, 2009 5:33:36 PM

sub mesa said:
So I would think software RAID could scale beyond hardware RAID, especially for SSDs.
...of course the downside is that it comes at the expense of available CPU cycles to run your applications, particularly for RAID-5. There must be a "sweet spot" somewhere at which you get the best combination of I/O and application performance, which would vary based on the application workload and the RAID controller throughput.
August 2, 2009 11:09:46 PM

True, but I would argue that I/O CPU usage is mostly irrelevant for home users.

If you're doing heavy I/O on a software RAID 5, then that task is I/O-bound anyway; the CPU won't be a bottleneck. Modern RAID 5 drivers use more CPU cycles than the simple drivers found on Windows, but they lead to faster transaction times because they speed up I/O. So you want the CPU to be used; right now, with an HDD, the CPU can't work because it's waiting all the time for I/O. What good is 1% CPU usage if the job takes three times longer to complete?

In high-end servers this may be different, because they do heavy CPU work, heavy disk I/O, and probably networking all at once. In that case, using hardware to offload certain tasks seems logical, to free up the host system since it does a lot of parallel processing. However, this is likely not true for home users.
August 26, 2009 12:50:10 PM

If it helps:

Rig 2 - 2 Samsung 7200 RPM SATA2 drives, 160 GB each (320 GB stripe), RAID 0 on nVidia RAID controller (Asus MB)
Minimum transfer rate: 10.6 MB/sec
Maximum transfer rate: 113.6 MB/sec
Average transfer rate: 96.5 MB/sec
Average access time: 13.7 ms
Burst rate: 88.4 MB/sec
CPU usage: 3.9%

Rig 1 - 2 Patriot Warp v2 SSDs, 64 GB each (128 GB stripe), RAID 0 on Intel RAID controller (eVGA board)
Minimum transfer rate: 142.8 MB/sec
Maximum transfer rate: 268.0 MB/sec
Average transfer rate: 262.1 MB/sec
Average access time: 0.2 ms (yeah....that's right)
Burst rate: 2676.1 MB/sec
CPU usage: 1.9 %
September 20, 2009 10:04:26 PM

@vseven,

Shouldn't you be seeing better results with your RAID 0 SSD setup?

Does the SATA 2 bottleneck apply? Or is this bottleneck only from each individual drive to the controller?
September 21, 2009 12:08:48 AM

I think it suffers the same effect most SSDs do... I get some stuttering with lots of random file writes. If I copy a 1 GB file over my network to the drives, it flies; if I copy 1,000 100 KB files, it takes forever.
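For anyone who wants to reproduce that comparison, here is a rough sketch; the target directory is a placeholder, the workloads mirror the description above (one 1 GB file versus 1,000 files of 100 KB, so the totals deliberately differ), and fsync is used so the OS page cache doesn't mask the drive's behavior:

```python
# Time one large sequential write versus many small writes on the same drive.
# TARGET_DIR is a placeholder; point it at the array you want to test.

import os
import time

TARGET_DIR = "testdata"
BIG_SIZE = 1024 * 1024 * 1024    # one 1 GB file
SMALL_SIZE = 100 * 1024          # 100 KB per small file
SMALL_COUNT = 1000               # 1,000 small files, as in the example above

def write_file(path: str, size: int, chunk: bytes) -> None:
    with open(path, "wb") as f:
        remaining = size
        while remaining > 0:
            n = min(len(chunk), remaining)
            f.write(chunk[:n])
            remaining -= n
        f.flush()
        os.fsync(f.fileno())

if __name__ == "__main__":
    os.makedirs(TARGET_DIR, exist_ok=True)
    chunk = os.urandom(1024 * 1024)   # reuse 1 MB of random data for every write

    start = time.time()
    write_file(os.path.join(TARGET_DIR, "big.bin"), BIG_SIZE, chunk)
    print(f"1 x 1 GB file: {time.time() - start:.1f} s")

    start = time.time()
    for i in range(SMALL_COUNT):
        write_file(os.path.join(TARGET_DIR, f"small_{i:04d}.bin"), SMALL_SIZE, chunk)
    print(f"{SMALL_COUNT} x 100 KB files: {time.time() - start:.1f} s")
```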
September 21, 2009 12:51:47 AM

vseven said:
I think it suffers the same effect most SSDs do... I get some stuttering with lots of random file writes. If I copy a 1 GB file over my network to the drives, it flies; if I copy 1,000 100 KB files, it takes forever.


Thanks for this info, vseven. I am already aware (as I think most who are curious about buying an SSD are) that the older JMicron controllers had stutter issues.

I was, however, referring to your average read speed numbers.

If you look here: http://www.overclock.net/hard-drives-storage/498027-rai...

2 x 60 GB Vertex drives in RAID 0 (230 MB/s x 2) actually resulted in 413 MB/s;
yours (175 MB/s x 2) resulted in 262 MB/s.

Old controller? Different memory used? Just trying to figure out why it's such a vast difference.

Thanks
September 21, 2009 3:16:05 AM

No idea; eVGA X58 board with 12 GB of Corsair RAM. I mean, overall it's great, but the write stutter is pretty horrible at times, locking the machine up for 5-10 seconds. If I did it all over again I'd probably do 4x 2.5" 7200 RPM drives for a little less money and better overall performance, but oh well.
September 5, 2011 2:20:40 PM

spambi said:
So if I made a RAID 0 array using SSDs, would I have a super fast hard drive?

Would it be worth the money? I'm planning on backing up to my NAS, so I'm not too worried about data loss, although I hear that SSDs are pretty long lasting.


RAID with SSDs is not the best idea, due to the parity writes trashing the wear-levelling algorithms. You're better off using one to back up the other and un-RAIDing them.
September 6, 2011 3:25:48 PM

I use a RAM disk for most I/O-intensive operations; way faster than anything, lol.

With 8 GB of RAM a virtual XP can barely run; it might be awesome on a workstation machine with 100+ GB of RAM.

Lightning fast; you can't blink.

People say buying huge amounts of RAM is a waste, but I use every inch of it.
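For the curious, here is a rough sketch (Linux only, assuming /dev/shm is mounted as tmpfs, which it is on most distributions) of the kind of RAM disk versus drive comparison being described; the on-disk path is a placeholder and should point at a real drive:

```python
# Compare write throughput to a tmpfs RAM disk against a directory on a
# physical drive. DISK_PATH is a placeholder; /tmp may itself be tmpfs on
# some systems, so point it somewhere actually backed by a disk.

import os
import time

RAM_PATH = "/dev/shm/ramdisk_test.bin"
DISK_PATH = "/tmp/disk_test.bin"
DATA = os.urandom(256 * 1024 * 1024)   # 256 MB of random data

def timed_write(path: str, sync: bool) -> float:
    start = time.time()
    with open(path, "wb") as f:
        f.write(DATA)
        if sync:                # fsync only makes sense for a real device
            f.flush()
            os.fsync(f.fileno())
    return time.time() - start

if __name__ == "__main__":
    mb = len(DATA) / (1024 * 1024)
    ram = timed_write(RAM_PATH, sync=False)
    disk = timed_write(DISK_PATH, sync=True)
    print(f"RAM disk: {mb / ram:.0f} MB/s, disk: {mb / disk:.0f} MB/s")
    os.remove(RAM_PATH)
    os.remove(DISK_PATH)
```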
October 20, 2011 6:00:19 PM

My new HPT RR640 has TRIM-enabled firmware. It's a 6 Gb/s SATA RAID controller that can handle up to 4 SSDs!

Opinions?

William

sminlal said:
Here's an interesting question... the TRIM command is going to be important for SSDs running on Windows 7. But it's a brand-new command that still isn't implemented in some SSDs.

I assume that RAID controller firmware is also going to have to be updated to pass a TRIM command issued by the OS through to the appropriate member drives. How can we tell if a particular RAID controller will actually do this?

October 20, 2011 7:28:00 PM

Only that this thread is fairly old and can be left to die. Please start a new thread.