RAID 0 + SSD = super fast hard drive?


spambi

Distinguished
Jun 9, 2009
So if I made a RAID 0 array using SSDs, would I have a super fast hard drive?

Would it be worth the money? I'm planning on backing up to my NAS, so I'm not too worried about data loss, although I hear that SSDs are pretty long-lasting anyway.
 

rcpratt

Distinguished
Jun 30, 2009
Sure, it would be fast. SSDs are pretty darn quick by themselves, though; you may find one is all you need.

Whether it's worth the money is entirely up to your situation.
 
Here's an interesting question... the TRIM command is going to be important for SSDs running on Windows 7, but it's a brand-new command that still isn't implemented in some SSDs.

I assume that RAID controller firmware is also going to have to be updated to pass a TRIM command issued by the OS through to the appropriate member drives. How can we tell if a particular RAID controller will actually do this?
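One partial answer, at least on the OS side: on Linux, newer kernels expose a device's discard (TRIM) parameters through sysfs, and an array device whose controller drops TRIM will typically report zeros there. A minimal sketch, assuming Linux; the device names are only examples:

```python
# Minimal sketch (assumes Linux): check whether a block device, as the
# OS sees it, advertises discard/TRIM support. On kernels that expose
# these sysfs files, a RAID device that swallows TRIM usually reports 0.
from pathlib import Path

def discard_supported(dev: str) -> bool:
    """True if sysfs reports nonzero discard parameters for `dev`."""
    q = Path("/sys/block") / dev / "queue"
    try:
        granularity = int((q / "discard_granularity").read_text())
        max_bytes = int((q / "discard_max_bytes").read_text())
    except (FileNotFoundError, ValueError):
        return False  # older kernels don't expose these files at all
    return granularity > 0 and max_bytes > 0

for dev in ("sda", "md0"):  # example device names; adjust to your system
    print(dev, "->", "discard OK" if discard_supported(dev) else "no discard")
```

This only tells you what the OS-visible device claims; whether a given hardware RAID controller actually forwards the command to the member drives is still firmware-dependent.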
 

Richy0money

Distinguished
Jul 29, 2009
Until we see it, I don't think TRIM matters; by the time it's important, SSDs will be a staple. Besides, I'm sure we can always flash our old SSDs if we even still want to bother with them.

Or we may look back on TRIM and laugh. lol, blast processing.
 

Richy0money

Distinguished
Jul 29, 2009

Nope, it sure wouldn't be worth it. Maybe next year. But if you're curious, check out the new Kingston V-Series drives.
 

sub mesa

Distinguished
It's better to buy an SSD with a good controller (like the Intel X25-M) than to put cheaper SSDs in RAID 0. Since SSDs already use interleaving, or "RAID 0", internally, buying a better SSD may give you more performance benefit than RAID 0 would. Hardware RAID in particular will not scale well with SSDs: an Areca controller is bound to about 70,000 IOps, while a single SSD can already do 35,000 IOps. So you'll hit a bottleneck pretty soon unless you pick software RAID 0.
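The arithmetic behind that, using only the figures quoted in this post (not measurements):

```python
# Back-of-the-envelope sketch: a hardware RAID controller caps total
# IOps, so striping fast SSDs behind it stops scaling very quickly.
SSD_IOPS = 35_000            # single good SSD (X25-M class), as quoted above
CONTROLLER_CEILING = 70_000  # quoted Areca processing limit

def effective_iops(n_drives: int, hardware_raid: bool) -> int:
    raw = n_drives * SSD_IOPS
    return min(raw, CONTROLLER_CEILING) if hardware_raid else raw

for n in (1, 2, 3, 4):
    print(f"{n} drives: HW RAID {effective_iops(n, True):>7,}  "
          f"SW RAID {effective_iops(n, False):>7,}")
# With the hardware controller, anything past two drives is wasted;
# software RAID 0 keeps scaling until some other bus or CPU limit.
```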
 

sub mesa

Distinguished
The Kingston is using a JMicron JMF-602 variant, by the way, so don't buy that drive.

Basically, the JMicron JMF602 controller has been laser-etched with the Toshiba name, as Kingston didn't want consumers to see the JMicron name and think this drive would stutter.

http://www.legitreviews.com/article/1005/2/
 

enser

Distinguished
Jul 23, 2009
Eh, I'll toss in my overly long 2 cents...

Current SSDs have proven optimal for specific I/O patterns, generally small random reads. Additionally, as pointed out, apps and the OS will queue a write then a read request, a pattern built around the seek-time limitation of hard drives that SSDs don't have; hence the SSD queue gets filled with odd read/write requests (not great when paired with some types of RAID, where there are write penalties). This can generate up to a 2:1 write penalty on a write transaction. Since Win 7 and, I believe, ZFS are only now addressing this, it's still somewhat of an edge case.
There are also new commands being written into SSDs to counter these issues, among others; again, as pointed out above.

For the consumer in the same price range, 15k SAS platters (and good 10k SATA) will generally outrun the current range of SSDs at a lower price per GB in any write I/O profile, and in sequential sweeping reads.
SSDs aren't a bad choice for OS boot drives, web environments that generally generate lots of random reads (hits), some types of lighter databases (not involving heavy random write transactions), etc.

If we're talking about RAID, RAID 0 works best when the stripe is aligned with the drive sector size and the desired I/O profile is taken into account. Generally, I've read that larger stripes benefit SSDs, along with an adjusted sector size, though don't ask: I haven't actually muddled with this, and this is nothing concrete. I can say that if you're going to pair the two up, I'd recommend HW RAID with a decent write cache.
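For what it's worth, the underlying layout is simple enough to sketch; this is just the textbook RAID 0 mapping, with illustrative parameters:

```python
# Toy sketch of how RAID 0 maps a logical block address (LBA) to a
# member drive, to show why stripe size vs. request size matters.
def raid0_map(lba: int, stripe_sectors: int, n_drives: int):
    """Return (drive index, sector offset on that drive) for an LBA."""
    stripe_no = lba // stripe_sectors        # which stripe the LBA is in
    drive = stripe_no % n_drives             # stripes rotate across drives
    stripe_on_drive = stripe_no // n_drives  # prior stripes on that drive
    offset = stripe_on_drive * stripe_sectors + lba % stripe_sectors
    return drive, offset

# 128 KiB stripe (256 sectors of 512 B) across 2 drives:
print(raid0_map(lba=1000, stripe_sectors=256, n_drives=2))  # -> (1, 488)
```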

Personally, I would not use an SSD RAID 0 for backup. Since backup is a write-heavy but generally second-tier workload, 5400 RPM SATA in RAID 5, 5+1, or 6 would be a more practical solution (again, this is what I would do). If you want SSD-level speed for backup, I'd recommend SAS, as generally you're looking at mostly sequential writes (scheduled jobs, etc.) with minimal random access. If you definitely want the gains of RAID 0, I would implement nested RAID (RAID 10, 50, or 60), so you've got high availability and performance.

Oh yeah, I would do all this on a HW RAID controller with a BBU.
 

sub mesa

Distinguished
[benchmark chart: 19508.png]


Still a lot better than the best consumer-class HDD, the VelociRaptor. So I don't understand your argument about writes.

Modern SSDs like Intel's don't do two-phase writes; they use free space, so they don't have to erase the block because it's already erased. This is also the reason they require free space, or they'll be very slow when writing. For example, if you filled your SSD to 100% and then removed files, without TRIM support the SSD would not know that, and couldn't remap random writes to free flash blocks.
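A toy model of that behavior (purely illustrative; no real controller works exactly like this):

```python
# Toy flash-translation-layer model: writes go into pre-erased blocks
# and get remapped, so the controller needs a pool of blocks it *knows*
# are free. Without TRIM, deleting files never refills that pool.
class ToyFTL:
    def __init__(self, n_blocks: int):
        self.mapping = {}                  # logical block -> flash block
        self.free = set(range(n_blocks))   # pre-erased flash blocks

    def write(self, logical: int) -> str:
        if not self.free:
            return "slow path: must erase before writing"  # the 100%-full case
        new = self.free.pop()
        old = self.mapping.get(logical)
        if old is not None:
            self.free.add(old)  # stale copy becomes reusable (after erase)
        self.mapping[logical] = new
        return "fast path: wrote to a pre-erased block"

    def trim(self, logical: int) -> None:
        """OS tells the SSD this data is deleted; block rejoins the pool."""
        old = self.mapping.pop(logical, None)
        if old is not None:
            self.free.add(old)

ftl = ToyFTL(n_blocks=8)
for lb in range(8):
    ftl.write(lb)        # fill every flash block
print(ftl.write(0))      # full drive with no TRIM: hits the slow path
ftl.trim(3)              # OS deletes a file and sends TRIM
print(ftl.write(0))      # fast again: the free pool has a block
```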

If you look at JMicron (JMF-602 controller) SSDs, maybe they should only be used in read-many-write-few situations; for example, light desktop PCs used for web browsing would be fine, as read latency is still a key advantage over HDDs even on cheap SSDs.

The adjusting-the-sector-size story is a way to involve all disk members in one I/O transaction. This very much hurts performance on SSDs, and it didn't give meaningful performance advantages on HDDs either. It's best if an I/O request can be handled by exactly one disk member, so that the other disk members can be loaded with different I/O requests at the same time and process them in parallel. If you don't do this, you may still get proper sequential speeds, as those are buffered anyway, but you won't get an increase in IOps performance, which is key.
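A quick illustration of that request-spanning point, with made-up parameters:

```python
# How many member drives does one request drag in? With a stripe at
# least as large as the request, one member serves it all; with a tiny
# stripe, every request touches every member and parallelism is lost.
def members_touched(start_lba, sectors, stripe_sectors, n_drives):
    first = start_lba // stripe_sectors
    last = (start_lba + sectors - 1) // stripe_sectors
    return min(last - first + 1, n_drives)

# An aligned 4 KiB request (8 sectors of 512 B) on a 2-drive array:
print(members_touched(0, 8, stripe_sectors=128, n_drives=2))  # 64 KiB stripe -> 1 member
print(members_touched(0, 8, stripe_sectors=1,   n_drives=2))  # 512 B stripe -> both members
```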

An Intel SSD can already do 35,000 random read IOps. With RAID 0 you can lift that to 100,000+, so enterprise-level performance is now within reach of casual consumers. That's pretty remarkable, honestly. HDDs totally get nuked by this kind of (low-level) performance:

[benchmark chart: 19506.png]
 

Richy0money

Distinguished
Jul 29, 2009
43
0
18,530

In my real-world testing of the cheap Kingston V-Series, I noticed a drastic improvement in response time; in-game loads are completely gone in Crysis (I thought that was the GPU's fault originally). It put new faith in my 8800 GPU.
I wouldn't use an SSD for constant reading and writing yet. I install the programs I use the most on it and keep an additional HDD for downloads, seldom-used programs, and media. But if you know what you like to use on your computer and you play games or use Photoshop, Office, etc., it's a godsend. SSD is the future; it really starts to feel like plug and play.
 

Raviolissimo

Distinguished
Apr 29, 2006


Yeah, just one SSD will be enough for me.

I like the sounds hard drives make. A 47 GB UW SCSI Seagate 5 1/4" drive sounds like a bass guitar. My 74 GB Raptor makes a cool sound, plus it's fast.

Having used hard drives that made cool sounds for 20 years, maybe I'll be a customer when somebody comes up with a utility to add hard drive noises to an SSD.

But the main thing is, they are FAST and expensive. Probably by the time I spring for a Core i7, another year will have gone by. $200 for 200 GB on an SSD: think that's possible?
 

donpacific2k

Distinguished
Dec 23, 2008
When the hard drive grinds away, I at least know it's working hard and that it's not some other problem causing the computer to be sluggish. Of course, it would be nice not to have the slowdown in the first place! Plus, I don't have to look down from the monitor at the HDD access light to see what's going on.
 

sub mesa

Distinguished
Well, that's why you have nice monitoring gadgets for both Windows and Linux. In Linux, you can also see the IOwait percentage, which tells you very quickly whether the disk is the bottleneck (> 90% IOwait) or the CPU.

Some screens:
http://ubuntu-tutorials.com/2008/06/20/at-a-glance-system-monitoring-with-panel-applets/

I too like to know what my system is doing; if I'm waiting and nothing happens, I want to see whether it's bottlenecked by the CPU, disk, or network, or not working at all! So this is a pretty good substitute for those used to hard drive seek sounds as a measure of system load (because the hard disk is the ultimate bottleneck: whenever the system is not instantly responding, it's mostly the disk that is slow).
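If you want that number without a panel applet, it comes straight from the kernel. A minimal sketch, assuming Linux and its /proc/stat layout (the fifth CPU field is iowait):

```python
# Sample the aggregate CPU counters twice and compute the fraction of
# time spent in iowait over the interval.
import time

def cpu_times():
    with open("/proc/stat") as f:
        # first line: "cpu user nice system idle iowait irq softirq ..."
        return [int(x) for x in f.readline().split()[1:]]

a = cpu_times()
time.sleep(1)
b = cpu_times()
delta = [y - x for x, y in zip(a, b)]
iowait_pct = 100 * delta[4] / sum(delta)  # field 5 is iowait
print(f"iowait: {iowait_pct:.1f}% (near 100% -> the disk is the bottleneck)")
```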

Richy0money: don't feel bad about your SSD. It can't beat an Intel SSD, but cheap SSDs still have the advantage of low latencies, so booting and application loading go very fast. Random write is still very bad, but you might not be doing that a lot. And if the stuttering problem (caused by write latencies going sky-high) is gone, it may be a good product for light desktop systems. Even the worst SSD is still better at tasks like booting or application loading (basically random read performance) than any HDD can ever be.

It's just that with the new Intel price cuts, the best product is becoming quite cheap, while the cheap products aren't getting any cheaper. OCZ only lowered its price because it was forced to by Intel, as not lowering the price would cost OCZ market share.
 

enser

Distinguished
Jul 23, 2009


Hm, I made two 'arguments', I suppose. Neither was a focused slam on SSDs, and my goal was to address the issue of SSDs in RAID 0 for backup purposes.
Basically, the first was that current SSDs are great for certain I/O profiles but show contention in others; however, this storage platform is constantly evolving, and you have shown that in your reply. Thanks for the updated article showing there is new tech being released that addresses the issues with some specific write profiles (as even stated where those benchmarks are posted). It looks like random writes on the new generation of SSDs are great. I would still like to see them synthetically benchmarked against 15k SAS across all profiles, as that is technically as available to consumers as SSDs, and the interfaces are becoming more common on consumer motherboards; more or less, that's what SSDs have traditionally been pitted against (with the Raptors second).
Though I have no doubt that SSDs are a potential all-purpose future of consumer storage, and have their uses now.

The other was about the general use of RAID and backup, and general drive practicality. The 'adjusting the sector size' story? RAID 0 stripe width and tweaking is a constant debate, but the fact remains that it depends on what I/O profile you are after. Generally speaking, the RAID 0 stripe is going to end up being a factor of the sector size for a great deal of stripe optimization (i.e., if you are basing your stripe width on clusters): for parallel processing, contiguous I/O to disk, and low seek, which is the point of striping. I'm not attempting to make a blanket statement, because of course you should choose the correct stripe for your expected I/O profiles (and whether or not you can optimize within those boundaries); results are going to depend on the controller, the disks, etc., and you adjust from there. I wasn't trying to state specifics, for those reasons. As for the SSD stuff, I left that open for someone else to conclude, as I'm not an up-to-the-minute expert on SSDs, though I have some experience with the, I suppose now, older tech.
Honestly, I think RAID 0 gets used a bit too much. I'm not saying it doesn't have its place (especially if the data is more or less volatile: highly repurposable consumer drives and some server OS drives, etc.), but you don't see double the performance gains across the board in the real world (as it's generally marketed), and you can achieve great performance with actual RAID volumes if you toss in your coin for decent hardware and want to keep your data around. Again, it has its place and its I/O profiles, and if you implement for those properly, more power to you.

So, the second point: are SSDs in RAID 0 good for backup? My personal opinion was most likely not at this point in time, and not on RAID 0. Generally, backup can be treated as tier-2 storage, mainly involves sequential sweeping writes, and is a high-availability operation. I did, however, want to give some alternative options...

I'm not going to stop anyone from flaming away... I'm most likely departing from the thread; hope I was of some benefit. :)

That's it for my $.02 again, and apologies for the ramble! ;)
 

enser

Distinguished
Jul 23, 2009


Agreed. Essentially, I was using them in the same fashion, and they worked great for me within said bounds. As their evolution continues, the next upgrade will be an SSD in the laptop. :)
 

enser

Distinguished
Jul 23, 2009


Does scaling mean anything to you? You will hit the wall fast when you're diving into nested RAID on onboard controllers, for that reason alone; and regardless, even for plain RAID 0 a performance gain can be seen. If you want to dive into the territory of write-back on non-parity RAID (if the controller allows it) and have proper cache backup, you can see even more.
If you're doing only two drives, it might not be completely necessary, and you can go for the value option there; but again, in my opinion, anything greater and I'd go with the hardware controller.
Regardless... even the base level of RAID management, stripe adjustment, cache tweaking, and dynamic LUN potential is worth the investment, in my opinion.
 

sub mesa

Distinguished
Scaling is something that works best on your host system, as it has the most powerful hardware, while with hardware RAID you'll probably hit a performance limit somewhere. Even two SSDs in RAID 0 on an Areca will hit the maximum IOps boundary the Areca can take (70,000), while software RAID would be able to exceed that.

So I would think software RAID can scale beyond hardware RAID, especially for SSDs. Actually, pairing SSDs with an ICHxR southbridge and enabling the "Write caching" option may be the fastest setup, because you get both a RAM write-back buffer and a very fast SSD for reading.
 
...of course the downside is that it comes at the expense of available CPU cycles to run your applications, particularly for RAID-5. There must be a "sweet spot" somewhere at which you get the best combination of I/O and application performance, which would vary based on the application workload and the RAID controller throughput.
 

sub mesa

Distinguished
True, but I would argue that I/O CPU usage is mostly irrelevant for home users.

If you're doing heavy I/O on a software RAID 5, then that task is I/O-bound anyway; the CPU won't be the bottleneck. Modern RAID 5 drivers use more CPU cycles than the simple drivers found on Windows, but they lead to faster transaction times because they speed up I/O. So you want the CPU to be used; as it stands, with an HDD the CPU can't do useful work because it's waiting for I/O all the time. What good is 1% CPU usage if the task takes three times longer to complete?

In high-end servers this may be different, because they do heavy CPU work, heavy disk I/O, and probably networking all at once. In that case, using hardware to offload certain tasks seems logical, to free up the host system for all that parallel processing. However, this is likely not true for home users.
 

vseven

Distinguished
Apr 24, 2009
If it helps:

Rig 2 - 2x Samsung 7200 RPM SATA2, 160 GB each (320 GB stripe), RAID 0 on nVidia RAID controller (Asus MB)
Minimum transfer rate: 10.6 MB/sec
Maximum transfer rate: 113.6 MB/sec
Average transfer rate: 96.5 MB/sec
Average access time: 13.7 ms
Burst rate: 88.4 MB/sec
CPU usage: 3.9%

Rig 1 - 2x Patriot Warp v2 SSD, 64 GB each (128 GB stripe), RAID 0 on Intel RAID controller (eVGA board)
Minimum transfer rate: 142.8 MB/sec
Maximum transfer rate: 268.0 MB/sec
Average transfer rate: 262.1 MB/sec
Average access time: 0.2 ms (yeah... that's right)
Burst rate: 2676.1 MB/sec
CPU usage: 1.9%
 

jabbilabbi

Distinguished
Sep 20, 2009
@vseven,

Shouldn't you be seeing better results with your RAID 0 SSD setup?

Does the SATA 2 bottleneck apply? Or is this bottleneck only from each individual drive to the controller?
 