SSD and HDD both in raid ?
Is it possible to run RAID 0 on two SSDs and on two HDDs at the same time?
You can have two RAID 0 setups: one with the SSDs and one with the hard drives.
That being said, it's a foolish thing to RAID 0 SSDs, for two reasons:
1) SSDs get faster as they get larger in capacity, so just buy a bigger drive.
2) Don't take the doubled chance of failure when SSDs are already MORE than fast enough.
DarkSable said: That being said, it's a foolish thing to RAID 0 SSDs, for two reasons:
1) SSDs get faster as they get larger in capacity, so just buy a bigger drive.
Why do you think larger SSDs are faster? It is because SSDs already employ RAID0, or interleaving, internally. A simple consumer SSD is actually a 16-disk RAID0, or even RAID5 for the upper-class SSDs.
Most SSDs are limited by their controller or the SATA interface; the only way to increase performance further is to use multiple SSDs in host RAID0, so that you are no longer limited to the speed of one SATA/600 connection.
Combining an HDD and an SSD in RAID0 is pointless, of course. This will only combine the bad traits: the RAID will have a small capacity because of the SSD, and low performance because of the HDD.
First, I've used RAID0 on ALL of my builds since the late nineties. And yes, a few motherboards allowed RAID0 for IDE drives (SATA drives were not even out yet).
There is a major difference between RAID0 on HDDs and on SSDs, and that is usage. HDDs held OS + programs + (and here is the difference) all your data. Depending on individual usage, your data consisted of small files and large data structures such as a gazillion large jpeg/bitmap photos and large video files (a single .vob is typically 1 gig, and a Blu-ray file can be up to 35 gigs for a single file), plus, for some (like me), large spreadsheets and CAD/CAM drawings.
Currently SSDs are normally used as an OS + program drive (for gamers, maybe a few maps too), but the vast majority of the data actually used is placed on the HDD.
Sub mesa is correct that SSDs employ RAID0 internally, so it makes sense to assume RAID0 across a pair of SSDs will jump performance. NOT quite so simple. Look at what RAID0 primarily does:
.. 1) Good Boost to Sequential performance.
.. 2) NO decrease in access time.
OK: sequential performance is the LEAST important parameter for an OS + program drive, BUT it is important when working with large data structures. Small-file 4K random performance is what matters for an OS + program drive, and that is governed by access time; hence very little performance gain.
Short explanation: for an OS + programs drive, over half of the files are 32K or smaller, and the typical stripe size for a RAID0 array is 64K. For the files called during an OS load or a program load, the percentage of files smaller than 64K is even greater. A file smaller than the stripe lands on a single drive, so RAID0 makes little sense for an OS + program drive.
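To put rough numbers on the stripe-size point, here is a quick sketch (the 64K stripe and two-drive array are just the typical values from above; this is an illustration, not a benchmark):

```python
# Illustrative model: in RAID0, a file only spans multiple member drives
# when it is larger than the stripe size (assuming stripe-aligned files).

STRIPE_SIZE_KB = 64   # typical RAID0 stripe size
NUM_DRIVES = 2

def drives_touched(file_kb, offset_kb=0):
    """How many member drives a contiguous file of file_kb touches,
    starting offset_kb from the beginning of the array."""
    first_stripe = offset_kb // STRIPE_SIZE_KB
    last_stripe = (offset_kb + file_kb - 1) // STRIPE_SIZE_KB
    return len({s % NUM_DRIVES for s in range(first_stripe, last_stripe + 1)})

# A 32K file sits entirely inside one stripe, on one drive:
print(drives_touched(32))    # 1 -> no parallel read possible
# A 256K sequential read spans four stripes across both drives:
print(drives_touched(256))   # 2 -> both drives can work in parallel
```

So for the many sub-64K files read during an OS or program load, only one drive is ever doing the work, which is exactly why the array adds little.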
NOTE: For a storage (your data) drive, be it a pair of SSDs or HDDs, RAID0 can be advantageous, especially when working with large data structures. But this is mostly dependent on the user, as they determine which files are most often used.
Bottom line: for OS + programs, go with either a single larger SSD or two separate SSDs (one for OS + programs and one for the files you most often use). HDD(s): your choice, and yes, RAID0 may be the better choice there. NOTE: typical LOW-end HDDs are NOT recommended for RAID0.
PS: You can improve the access time of HDDs in RAID0 by "short stroking": using only about 20-30 percent of each drive for the array. For example, with a pair of WD 640GB Blacks, access time can be decreased to about 9.5 milliseconds versus 12.6 milliseconds for a single drive. DOWNSIDE (nothing is free): you lose 70-80% of the drive capacity to increase random 4K performance. THIS DOES NOT WORK for SSDs.
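A back-of-the-envelope sketch of why short stroking helps (the latency and seek figures below are assumed round numbers for a 7200 RPM drive, and the linear seek-vs-span model is a crude simplification):

```python
# Crude model: access time = rotational latency + average seek time.
# Confining the array to a small slice of the platter shrinks head travel,
# and (simplifying a lot) we assume average seek scales with that span.

rotational_latency_ms = 4.17       # half a revolution at 7200 RPM
full_stroke_avg_seek_ms = 8.4      # assumed full-capacity average seek
used_fraction = 0.25               # short-stroke to ~25% of the platter

short_seek_ms = full_stroke_avg_seek_ms * used_fraction

print(rotational_latency_ms + full_stroke_avg_seek_ms)  # ~12.6 ms, full drive
print(rotational_latency_ms + short_seek_ms)            # ~6.3 ms, short stroked
```

Rotational latency is untouched, which is why the gain flattens out; and since SSDs have no moving heads, there is nothing for short stroking to shrink, matching the note above.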
Quote: Sub mesa is correct that SSDs employ RAID0 internally, so it makes sense to assume RAID0 across a pair of SSDs will jump performance. NOT quite so simple. Look at what RAID0 primarily does:
.. 1) Good Boost to Sequential performance.
.. 2) NO decrease in access time.
Somehow, many people think that RAID0 only improves sequential I/O. However, this is simply not true!
It could be that, for various reasons such as misalignment issues or too small a stripe size, the improvement in IOps performance is not apparent in all RAID0 configurations.
But RAID0 can double sequential performance just as easily as it can double IOps performance, and under load RAID0 can cut average access time in half. Only sub-optimal configurations will fail to deliver an IOps benefit from RAID0.
You are right that the sequential performance increase is not that important for a desktop system; IOps is far more important. The theory behind RAID0 is simple: while disk1 is handling one random read, disk2 can process another I/O request. If utilised properly, this allows overall performance to be doubled for both sequential and random I/O.
The one limitation of RAID0 is that it cannot improve single-queue-depth random reads. It cannot predict random workloads and cannot know which random fragment will be requested next, so in this scenario only one drive in the RAID0 pool can be utilised at any particular time. This applies to SSDs just as well, which is why their 4K random read performance is always around ~25MB/s; this score cannot be improved with RAID0 interleaving.
However, when you look at multi-queue random reads, you will see that SSDs have far higher performance, like 250-350MB/s, a factor of 10 better. The maximum factor here is 16, due to the 16-way interleaving (read: the SSD is actually a RAID0 of 16 small SSDs).
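The queue-depth effect can be captured in a toy model (channel count and per-channel speed are the figures from this thread; real firmware scheduling is obviously messier):

```python
# Toy model: random-read throughput scales with queue depth, because each
# outstanding request can land on a different internal channel, but it is
# capped at the interleaving factor.

CHANNELS = 16            # the "internal RAID0" width assumed above
PER_CHANNEL_MBPS = 25    # ~QD1 4K random read speed of one channel

def random_read_mbps(queue_depth):
    # At most queue_depth channels can be kept busy simultaneously.
    busy_channels = min(queue_depth, CHANNELS)
    return busy_channels * PER_CHANNEL_MBPS

print(random_read_mbps(1))    # 25  -> QD1: one channel, same for all SSDs
print(random_read_mbps(10))   # 250 -> deep queue: ~10x faster
print(random_read_mbps(32))   # 400 -> capped by the 16-way interleaving
```

Host RAID0 raises the cap (16 channels becomes 32), but does nothing at QD1, which is the point being argued here.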
For some specific RAID0 benchmarks, please read my (unfinished) article:
I concur with much of what you said. However, I stand by what I stated: the AVERAGE user will see very little performance gain for 4K random access. I may be mistaken, but in the back of my mind the average queue depth is around 4.
For HDDs, access time (seek time included) is also a function of RPM, magnetic domain density, and where the data is located on the platter. These are physical constraints and are NOT affected by single-drive versus RAID0 configurations.
"Synthetic" benchmarks for SSDs have become of little importance to the typical user, as they do NOT scale to real-life day-to-day usage. SSD reviews have pretty much stopped using PCMark Vantage because it was hard to differentiate performance differences.
To illustrate: my 256 gig Crucial M4's overall AS SSD score was approx. 750, while my 256 gig Samsung 840's was 1100. That is a 47% jump based on the benchmark, but in real life I saw less than a 10% jump in performance: boot time decreased from about 13 sec to 12 sec, and for program load times my eyeballs (or stopwatch) could not measure the change. So even if it were 100x faster, if you cannot see it, the effect is zero.
i.e. click on a spreadsheet link and the program opens with the spreadsheet; I need to try Word with a 300-page doc.
sub mesa: I'm not knocking what you are saying, and you have some great research backing it up. I'm just saying that the typical user (on here it seems to be gamers; I'm not a gamer) will not see a big enough change from RAID0 on an OS + program drive to warrant it. HDDs and user data are a different story.
Well I do not disagree with what you just said either. I just like RAID0 very much and will defend its glory when needed. ;-)
Seriously, it is interesting to investigate why RAID0 does improve performance in theory but doesn't seem to live up to that promise. I can think of several issues that might be the cause for this:
1) First and foremost, RAID0 *DOES* do its job. Without the internal RAID0, SSDs would be quite slow, with something like 110MB/s sequential read and 7MB/s sequential write, and much lower random I/O performance as well.
2) With a single SSD already being a 16-way RAID0, turning it into a 32-way RAID0 by combining two SSDs with host RAID0 runs into diminishing returns. Going from 1 to 4 channels gives more benefit than going from 128 to 256, for example.
3) For desktop users the most important performance specification is not sequential read/write or random write, but random read at single queue depth. About half of all I/O is this kind of access, and it is exactly the kind of I/O that RAID0 cannot improve. Even a single SSD runs at 1/16th speed here, at ~25MB/s random read, which is about the same for ALL SSDs. This is also why performance differences between SSDs are often not noticeable to end-users: modern SSDs all perform about the same for this access type, which is bottlenecked by latency and cannot be improved by RAID0 or multichannel I/O.
4) In the cases where RAID0 across several SSDs does give a benefit, the user might not notice it. A single SSD is already a hundred to a thousand times faster than a harddrive, meaning the CPU no longer has to endlessly wait for the harddrive to finally return some 4K random fragment. Instead, an array of SSDs will bring your CPU to its knees, stuck in turbo-core full load on one CPU core. (*)
5) In the remaining cases where the system actually is faster, the user might still not notice it. Generally it takes about a 40% performance increase for end-users to distinguish a real improvement from the placebo effect (people only thinking it is faster).
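Points 3 and 5 combine into a simple Amdahl-style estimate (the 50% workload share and the 2x speedup are the assumptions from the list above, not measurements):

```python
# Amdahl-style sketch: if a fraction of I/O time cannot be sped up by
# RAID0 (QD1 random reads), the overall gain from doubling the rest
# is much smaller than 2x.

def overall_speedup(unimprovable_fraction, speedup_of_rest):
    return 1 / (unimprovable_fraction +
                (1 - unimprovable_fraction) / speedup_of_rest)

# RAID0 doubles everything except the ~50% QD1 random-read share:
print(overall_speedup(0.5, 2.0))   # ~1.33x overall
```

A ~33% overall gain sits below the ~40% threshold mentioned in point 5, which is one way to reconcile "RAID0 works" with "users don't notice".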
In the past, RAID0 was often crippled because Windows created partitions at a 31.5K offset, which is the worst kind of misalignment one could create. The result was that Windows RAID0 setups of that era only doubled sequential I/O but actually increased the access time, because now two harddrives had to seek for a single request instead of only one. Such RAIDs were crippled and of little use except for faster transfer of large files.
Many of the tasks you describe, booting and launching applications, are exactly the ones where the SSD can only use one channel and the internal RAID is powerless. Two very common tasks that cannot be improved by RAID0. So even a single SSD is much slower at these tasks than it could have been if the access pattern were predictable.
The funny thing is that all these strengths and limitations of RAID0 also apply to similar interleaving technologies: multi-lane PCI Express, dual-channel DDR, multi-core processors, SLI video cards. Most struggle with the same kinds of limitations, but nevertheless interleaving and parallel operation lie at the core of most performance increases found in modern computer systems.
The multi-core processor is a good example. It has basically the same problem: not all programs can utilise all cores effectively. Many will only use one core, so single-core performance is a very important aspect of your CPU, not just the combined performance of all cores. This means an 8-core 1GHz is worse for a desktop than a 2GHz dual-core. For servers it is often the other way around, since server applications are often suitable for parallel execution.
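The CPU analogy in numbers (the clock speeds match the example above; the 20% and 100% parallel fractions are assumed workload mixes):

```python
# Illustrative: time to finish a task that is only partly parallelisable,
# for an 8-core 1GHz versus a 2GHz dual-core.

def task_time(work_units, cores, ghz, parallel_fraction):
    serial = work_units * (1 - parallel_fraction) / ghz
    parallel = work_units * parallel_fraction / (ghz * cores)
    return serial + parallel

# Mostly-serial desktop task (20% parallel): the 2GHz dual-core wins.
print(task_time(100, cores=8, ghz=1.0, parallel_fraction=0.2))  # 82.5
print(task_time(100, cores=2, ghz=2.0, parallel_fraction=0.2))  # 45.0
# Fully parallel server workload: the 8-core 1GHz wins.
print(task_time(100, cores=8, ghz=1.0, parallel_fraction=1.0))  # 12.5
print(task_time(100, cores=2, ghz=2.0, parallel_fraction=1.0))  # 25.0
```

Swap "cores" for RAID0 member drives and "parallel fraction" for queueable I/O and the same arithmetic describes the SSD case.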
So if you ask me whether RAID0 is worth it, I would argue that the very simplicity of RAID0 lies at the core of many performance improvements in your computer. Whether people choose a single larger SSD or multiple smaller SSDs in RAID0 might not change the actual performance experience much, and the same may apply to dual-core versus quad-core on the average desktop. Yet people are content to pay twice as much for effectively a very small increase in actual performance. With SSDs in RAID0 you pay only 10% more for a potential 100% additional performance; I believe that is worth it. I'm a guy who always wants the maximum gain for the lowest cost. But consumers buying a single large SSD are not missing much at all, and probably have an easier time because they skip the host RAID layer altogether.
I loved RAID0 also. All my desktop systems starting in the late 1990s used RAID0. That was with "SLOW" IDE HDDs. The last system I used RAID0 on was my E6400 (OCed to 3.2), which had RAID0 on one pair of drives for XP and one pair with Vista (later swapped the Vista out for the Win 7 beta). I also checked out short stroking, which gave a nice additional boost.
To those that say NO because of the possibility of a single drive failure (the doubling of your odds of a failure): well, in the many years I used it I had only ONE failure, and that was after about 10 years of use. However, with the newer breed of consumer HDDs I'm not so sure, which is why I recommend enterprise HDDs when using RAID0.
I tend to agree on hitting the point of diminishing returns with RAID0 for SSDs outside of sequential performance.
Take care and enjoy.
Added: even without the internal RAID0 in SSDs, their random 4K performance would be much higher than an HDD's due to tenths-of-a-millisecond access times, versus the HDD average of 9 milliseconds (10K RPM) to 12 milliseconds for 7200 RPM drives.
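That access-time gap translates directly into QD1 random 4K throughput; a quick sketch (the 0.15 ms SSD latency is an assumed illustrative figure):

```python
# Rough sketch: at queue depth 1, one request is in flight at a time,
# so throughput is simply requests-per-second times request size.

def qd1_random_mbps(access_time_ms, request_kb=4):
    iops = 1000 / access_time_ms       # one 4K request at a time
    return iops * request_kb / 1024    # MB/s

print(qd1_random_mbps(0.15))   # SSD, ~0.15 ms access  -> ~26 MB/s
print(qd1_random_mbps(12))     # 7200 RPM HDD, ~12 ms  -> ~0.33 MB/s
```

Which is why even a "slow" single SSD feels dramatically faster than any HDD for OS and program work, RAID0 or not.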
1) Is this for SPEED (RAID0), redundancy (RAID1), or both?
2) I recommend a single SSD for Windows.
3) 2xHDD (RAID0) for games is what I used to do.
I've completely changed how I do things now, partially because STEAM now allows a second games library folder. For games that benefit from SSDs I have a dedicated SSD, though many people could use a much SMALLER one:
Drive #1 - 128GB Samsung PRO (Windows + apps)
Drive #2 - 256GB Samsung PRO (games drive only; 2nd Steam folder and some non-Steam games)
Drives #3, #4 - 2x 3TB Seagate in RAID1 for redundancy
How this works:
I see NO reason to RAID the Windows drive. It's plenty fast, and RAID0 causes issues. RAID1 is not required and is a waste of money because I BACK UP with Acronis True Image 2013 automatically.
I have a lot of games and important info, so I decided on RAID1 with two 3TB drives. My STEAM folder is there and performance in games is quite good.
SSD and GAMES:
Most games don't benefit that much anyway, but some do. I notice a difference in SKYRIM load times and in other games.
STEAM and 2nd folder on SSD:
*This may be the coolest thing. Have a LARGE number of games but want the best drive performance? Simply MOVE the games you currently play to the SSD. Even a 60GB SSD is good for a few games (and how many do you play at once?).
1) Backup the game in Steam.
2) Delete the game in Steam.
3) Restore the game (to 2nd Steam folder on the SSD)
4) Delete the backup if you wish via Windows Explorer.
- Windows on a single SSD (no RAID)
- backup Windows periodically (Acronis TI)
- single HDD or 2xRAID1 for redundancy
- SSD as second STEAM folder?
- can MOVE Steam games easily from HDD to SSD and back
RetiredChief said: I loved RAID0 also. All my desktop systems starting in the late 1990s used RAID0. That was with "SLOW" IDE HDDs. The last system I used RAID0 on was my E6400 (OCed to 3.2), which had RAID0 on one pair of drives for XP and one pair with Vista (later swapped the Vista out for the Win 7 beta).
Well, the sad thing is that in the early days, when we needed the performance boost from RAID0 so desperately, it never really lived up to its promise. In many cases people ended up buying RAID controllers with a PCI interface for their Windows systems. This resulted in multiple problems:
1) The misaligned partitions that Windows created. It took until Vista SP1 before Microsoft started creating properly aligned partitions at a 1024K or 100M offset.
2) The controller was connected through PCI, which meant a bottleneck, because at that time people used the PCI bus for multiple things like sound, USB, and other add-on controllers.
3) In the case of Parallel ATA, drives connected to the same cable would slow each other down, because the cable was not designed for parallel I/O despite its name; only the signalling is parallel.
4) The Promise FastTrak TX150 was a popular controller. Like many cheap controllers, it was FakeRAID: a simple controller chip with Windows drivers providing the actual RAID functionality. Promise in particular had 'dirty' RAID drivers with an ugly hack that always read the full stripe even if only a fraction of it was requested. This improved some low-end benchmarks but was bad for real applications due to higher latency and lower IOps.
Today things are better, since the Intel onboard RAID provides good Windows drivers with plenty of features and generally decent performance (still worse than Linux and BSD, though). One key feature of the Intel onboard RAID drivers is that they allow RAID arrays to run in 'volume write-back mode', which means host RAM is utilised as a write-back buffercache, much the same way true hardware RAID controllers use their dedicated RAM memory for write-buffering.
Quote: To those that say NO because of the possibility of a single drive failure (the doubling of your odds of a failure): well, in the many years I used it I had only ONE failure, and that was after about 10 years of use.
Well, I always answer with a bit of logic of my own, starting with: doubling your failure rate is not very significant. We do not fear buying quad-core processors because they have four times as many parts that could break. It may be true that a larger processor with more 'real estate' is more prone to failure; however, due to the very low failure rate during normal usage, even a 10-times or 100-times higher failure rate doesn't change much. Harddrives, on the other hand, have failure rates a couple of orders of magnitude higher.
My point is that whether your harddrive has a 0.25% or a 1.50% failure rate doesn't change one bit about the fact that you need to protect your data anyway. Even 0.25% is a million times too high to be considered really reliable. So you need to protect yourself with backups regardless of whether you run a single disk or RAID0.
Additionally, users who run RAID0 arrays probably know the risk and have taken proper steps to protect their data through backups. However, those who avoid RAID0 'because it is unsafe' are far more likely to trust their RAID5 array, for example, which may lead to a false sense of security. They may have backups, but do not update them or pay much attention to them, because they already feel protected by their RAID5. In that case, those running RAID0 may actually protect their data better than those running RAID5, in the sense that I assume the RAID0 group has better backups.
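The "doubling the odds" arithmetic, spelled out (the annual failure rates are the illustrative figures from this post, not vendor data):

```python
# A RAID0 of n drives fails if ANY member fails (assuming independent
# failures, which is a simplification).

def array_failure_rate(per_drive_rate, n):
    return 1 - (1 - per_drive_rate) ** n

p = 0.015                           # assumed 1.5% annual failure rate
print(array_failure_rate(p, 1))     # 0.015   -> single drive
print(array_failure_rate(p, 2))     # ~0.0298 -> RAID0 pair, roughly doubled
```

Roughly doubled, yes, but both numbers are far too high to skip backups, which is the actual point.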