Solved

How many RAID HDDs does it take to equal SSD performance?

December 24, 2009 6:43:07 PM

I've done a considerable amount of reading regarding flash drives and RAID, but I am unfortunately no closer to an answer regarding this question.

How many disks in RAID (And in what RAID configuration) would it take to equal the performance of a single SSD?

I realize the word "performance" is subjective and varies widely with the application. My question concerns a normal desktop PC, used daily for browsing, gaming, watching videos, etc. I suppose this question breaks down into two parts: random and sequential performance. Since I am talking about a desktop PC, there are a lot of random operations that occur (application loading mostly, I would think). There's also a good bit of sequential work (copying files, watching large video files, etc.). I know RAID HDs can have the advantage over SSDs on sequential read and especially on sequential write.

Will any number of RAID hard drives have the advantage over an SSD in the random category?

Thanks in advance and happy holidays to all!


December 24, 2009 7:10:00 PM

Well, if you had perhaps 20 or so Seagate Cheetah 15krpm drives and a good RAID controller, you could match an Intel SSD for IOPS. That's not the same thing though - there's really no way to match an SSD for access time (or low-queue depth IOPS), which is what makes a huge difference for single-user tasks.

Honestly, your question is hard to conclusively answer, but I can say that you won't approach the performance of an SSD with any reasonable RAID setup for most tasks.
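As a rough sanity check on that "20 or so drives" figure, here is a back-of-envelope sketch. All numbers are illustrative spec-sheet ballparks (roughly Intel X25-M class random-write IOPS), not benchmarks:

```python
# Back-of-envelope IOPS math with illustrative numbers.
# A 15k rpm drive: ~3.5 ms average seek + ~2 ms average rotational latency.
avg_service_ms = 3.5 + 2.0
hdd_iops = 1000 / avg_service_ms          # ~180 random IOPS per spindle

# An Intel X25-M class SSD was rated around 3,300 random *write* IOPS
# (random reads were rated far higher, ~35,000).
ssd_write_iops = 3300

drives_needed = ssd_write_iops / hdd_iops
print(f"{hdd_iops:.0f} IOPS per drive; ~{drives_needed:.0f} drives to match SSD random writes")
```

That lands right around the 18-20 drive mark for random writes; matching the random-read rating would take an order of magnitude more spindles.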
December 24, 2009 7:16:22 PM

For booting, browsing and starting applications, NO amount of hard drives in ANY RAID configuration can EVER match an SSD. This is because RAID doesn't affect ACCESS TIMES, which are the critical metric for booting and starting applications. And SSDs have access times that are literally 100 times faster than a hard drive.

RAID CAN increase the number of concurrent I/Os, and this can be very useful under some scenarios, particularly in server environments. And it WILL increase the sequential transfer rate - but for desktops it really doesn't make a significant difference except for the times required to copy, read or write individual large files.

Note that watching videos should be a smooth experience no matter what kind of hard drive you have, because even the slowest hard drives are fast enough to keep up with a video viewed at normal speed.
December 24, 2009 9:13:57 PM

You do realize the thing that makes an SSD so fast is its almost-zero access time?
The more drives you put into a RAID array, the longer the access time becomes.
In short, what sminlal said right up there ^ is about as short and sweet as it gets.
December 24, 2009 11:19:05 PM

jitpublisher said:
The more drives you put into a RAID array, the longer the access time becomes.
Well, that's not really true either. In fact with a RAID-1 array the access time can actually improve a bit because you have two disks with copies of the same data and if the controller is smart it will issue the I/O request to the disk whose access arm is closest to the data. But with RAID 0 the access time is the same no matter how many disks you add. It's like owning 10 delivery trucks instead of just 1. It takes just as long for 10 delivery trucks to move stuff from LA to San Francisco as it takes 1 truck (same "access time"), but they can move 10 times as much stuff (more tons/day, equivalent to more MB/sec).
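The delivery-truck analogy can be written as a toy model (all numbers are made up for illustration):

```python
# Toy model: RAID 0 striping scales sequential bandwidth with the number of
# drives, but any single random request still lands on exactly one drive,
# so the access time is unchanged.
def raid0(n_drives, access_ms=12.0, per_drive_mb_s=100.0):
    return {"access_ms": access_ms, "seq_mb_s": n_drives * per_drive_mb_s}

one_truck = raid0(1)
ten_trucks = raid0(10)
print(one_truck)   # same access time...
print(ten_trucks)  # ...but 10x the sequential throughput
```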
December 25, 2009 12:53:55 AM

Thanks for the quick responses. Guess I should be a bit more clear.

I do realize it is impossible to match an SSD's access time (the good ones, at least). I also realize that RAID will INCREASE the access time the more disks you add. HOWEVER, I'm currently running a 3-disk RAID 0 and it is able to perform random reads/writes faster than any of my single disks. This is where part of my confusion comes in. (Note: this information regarding increased access times with RAID comes from here: http://www.tomshardware.com/reviews/RAID-SCALING-CHARTS... It's a bit old, but I would think it still applies.)

Surely access time alone isn't what determines random read/write performance (boot-up, application loading, etc.). IOPS play a role in this, and at higher queue depths RAID should be able to get close to an SSD... right?

I suppose another part of my confusion is... with the exception of cost... SSDs can't have hard drives beat in EVERY situation, can they? I mean, otherwise, why would any enthusiast ever make his boot drive anything but an SSD? For sequential read and write I believe that RAID HDs are still superior... but how often does that situation come up? As far as I can tell, the only time sequential read/write occurs on a desktop PC is when copying files or opening large files. In essence... should the sequential read/write benchmark almost not even be considered for desktop PCs unless they are dedicated to some specialized task?

Thanks again.
December 25, 2009 12:57:53 AM

At extremely high queue depths, a large RAID array can approach the IOPS of an SSD. The problem with this statement is that most desktops will stay at a low queue depth at all times, and would not benefit from this advantage of RAID.

As for your question? Good SSDs do have hard drives beaten in basically every way, including sequentials (on a per-drive basis - a RAID array could beat SSDs in sequentials on a per-dollar basis). The reason that not everyone is running an SSD is because of cost and capacity. It's $200+ to get a good 80GB SSD, while it's only $100 or less to get a good 1TB HDD. People who care more about gaming performance than boot times, and who are on somewhat of a budget will put this differential into a better graphics card or CPU rather than spend it on storage, even though the SSD is unquestionably faster.
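The queue-depth point can be sketched crudely (hypothetical numbers): an n-drive array can only keep min(queue depth, n) spindles busy at once, while a good SSD delivers high IOPS even at queue depth 1.

```python
# Crude model of array IOPS vs queue depth (illustrative numbers).
def array_iops(n_drives, queue_depth, per_drive_iops=180):
    # A striped array services at most min(queue_depth, n_drives)
    # requests in parallel; idle spindles contribute nothing.
    return min(queue_depth, n_drives) * per_drive_iops

SSD_IOPS = 10_000  # hypothetical SSD, roughly flat across queue depths

for qd in (1, 4, 32):
    print(f"QD {qd:>2}: 20-drive array {array_iops(20, qd):>5} IOPS vs SSD {SSD_IOPS}")
```

At the QD 1-4 range where desktops live, the array is nowhere close; only at queue depths that saturate every spindle does it start to approach SSD territory.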
December 25, 2009 5:17:02 AM

As cjl stated, there just aren't that many tasks that a typical desktop user would do which could create a large enough queue of I/O requests for the higher concurrent I/O capability of a RAID set to make a very big difference. You could certainly manufacture some workloads to do it - for example you could create yourself a batch file that included a bunch of START statements to launch several heavy-duty applications at the same time. But few people actually do that, so as a general rule it's just not something that's relevant. But if it's something that YOU do then yes, a large RAID set might be worth it.

Aside from concurrent I/O, the one area where enough disks in a RAID configuration can definitely beat an SSD is in sequential transfer rates, particularly write rates (which tend to be the Achilles heel of SSDs). But as you mentioned, that's only relevant to certain rather specialized tasks such as large file copies or applications like video editing or processing lots of RAW camera photos.
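To put numbers on the sequential case (throughputs are illustrative: a decent 7200 rpm drive of the era streamed around 100 MB/s, while early MLC SSDs often wrote at 80 MB/s or less):

```python
# Time to write a 50 GB file sequentially at various throughputs.
FILE_MB = 50 * 1024

def copy_seconds(mb_per_s):
    return FILE_MB / mb_per_s

print(f"1x HDD (100 MB/s):        {copy_seconds(100):.0f} s")
print(f"4x RAID 0 (400 MB/s):     {copy_seconds(400):.0f} s")
print(f"early SSD (80 MB/s write): {copy_seconds(80):.0f} s")
```

Striping scales writes almost linearly, so a modest array wins this specific race, which is exactly why it only matters for large-file workloads.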
December 25, 2009 1:09:14 PM

Get an 80GB SSD and put your games and OS on it. For reliable storage, do a RAID 1 with 1TB or, if required, 2TB drives... you don't need the fastest or costliest drives for the RAID.

Best solution

December 26, 2009 3:55:55 AM

alphanode said:
Thanks for the quick responses. Guess I should be a bit more clear.

I do realize it is impossible to match an SSD's access time (the good ones, at least). I also realize that RAID will INCREASE the access time the more disks you add. HOWEVER, I'm currently running a 3-disk RAID 0 and it is able to perform random reads/writes faster than any of my single disks. This is where part of my confusion comes in. (Note: this information regarding increased access times with RAID comes from here: http://www.tomshardware.com/reviews/RAID-SCALING-CHARTS... It's a bit old, but I would think it still applies.)

Surely access time alone isn't what determines random read/write performance (boot-up, application loading, etc.). IOPS play a role in this, and at higher queue depths RAID should be able to get close to an SSD... right?

I suppose another part of my confusion is... with the exception of cost... SSDs can't have hard drives beat in EVERY situation, can they? I mean, otherwise, why would any enthusiast ever make his boot drive anything but an SSD? For sequential read and write I believe that RAID HDs are still superior... but how often does that situation come up? As far as I can tell, the only time sequential read/write occurs on a desktop PC is when copying files or opening large files. In essence... should the sequential read/write benchmark almost not even be considered for desktop PCs unless they are dedicated to some specialized task?

Thanks again.


Actually, good SSDs like Intel's have hard drives beat in every situation, and by a big margin too. The only bad thing about an SSD is not being able to overwrite data in place without an erase cycle, but TRIM has already tackled that problem very well.
SSD vs HDD:
-Longer life
-Faster sequential reads
-Faster access time
-Lower power consumption
-More reliable
-Tolerates more shock, since there are no moving parts
-Less heat
-Smaller
-Practically the same write speeds, and improving with newer drives
-Faster random reads/writes
-Silent operation

Am I missing something? Because it looks to me like an SSD murders an HDD in every way possible.
December 26, 2009 4:44:52 AM

Well, you're missing that in a high demand application, especially one that is heavy on writes, an HDD will actually outlast an SSD. There's the capacity/price dilemma too. SSDs aren't perfect, but they are darn good.
December 26, 2009 5:02:00 AM

So if an HDD beats an SSD on price per GB and on write endurance, that's 2 wins compared to the SSD's 12 wins in my list. :)  SSDs demolish HDDs. And about apps heavy on writes: I think an SLC drive will handle those well, since SLC can take a huge number of write cycles. Moreover, SSDs are expensive because they are new. Wait a couple of years, maybe even 5, and we'll see prices gradually drop and more SLC-based drives get out there. Eventually HDDs will fade, first in the consumer market and then in the business sector, as storage farms get replaced by SSDs: on a farm with tens to hundreds of terabytes of devices, SSDs mean less power and heat, so less energy consumption and less cooling required, as well as less noise in a corporate environment. So I think SSDs are the future and will fade HDDs out very quickly once they get cheaper in the coming years. If SSDs are close to perfect now, they will be perfect very soon.
December 26, 2009 5:43:51 AM

blackhawk1928 said:
So SSD's i think are the future and will fade HDD's out very quickly once they get cheaper within the coming years.
Don't forget that HDDs will increase in capacity for a given cost just as SSDs will. I'm guessing that SSDs will become mainstream for the OS over the next 2-3 years, but HDDs will continue to be the choice for bulk storage for perhaps a decade or more. Beyond that, it depends on the relative rates of improvement of the two technologies.
December 26, 2009 2:05:08 PM

sminlal said:
Well, that's not really true either. In fact with a RAID-1 array the access time can actually improve a bit because you have two disks with copies of the same data and if the controller is smart it will issue the I/O request to the disk whose access arm is closest to the data. But with RAID 0 the access time is the same no matter how many disks you add. It's like owning 10 delivery trucks instead of just 1. It takes just as long for 10 delivery trucks to move stuff from LA to San Francisco as it takes 1 truck (same "access time"), but they can move 10 times as much stuff (more tons/day, equivalent to more MB/sec).



I am sorry, but I don't agree with you on this. It may not be a substantial amount, but adding more disks, even in RAID 1, will usually slow access times. Data transfer speeds get better, but access times degrade. Of course there are exceptions, but generally speaking my statements are accurate.
December 26, 2009 2:51:48 PM

Can you explain why you think the access times degrade?

For example, in a 2-drive RAID 0 set, any given I/O request has to be issued to one drive or the other. There's no difference in access times for the individual drives, and the CPU overhead to determine which drive to use is negligible. Why would the access time be longer?

And RAID 1 sets with an optimizing controller most definitely CAN have somewhat better READ access times. Writes are another story though, since the time to complete a write is the time for the drive that takes the longest.
December 29, 2009 12:32:29 PM

sminlal said:
Can you explain why you think the access times degrade?

For example, in a 2-drive RAID 0 set, any given I/O request has to be issued to one drive or the other. There's no difference in access times for the individual drives, and the CPU overhead to determine which drive to use is negligible. Why would the access time be longer?

And RAID 1 sets with an optimizing controller most definitely CAN have somewhat better READ access times. Writes are another story though, since the time to complete a write is the time for the drive that takes the longest.


Sorry for the long delay in getting back to you, but here is a link:
http://www.tomshardware.com/reviews/RAID-SCALING-CHARTS...
December 29, 2009 4:14:48 PM

I see the numbers, but I don't understand them. The explanation in the article is that the access time of a multi-drive array suffers because "the access time in a RAID array is close to the longest access time of all the drives, plus protocol overhead". But there's no way that a more-than-doubling of access time could be explained by either of those. Remember that RAID 0 requires no additional I/O operations over a single drive, and its "protocol overhead" consists of just a few instructions that choose which drive to access.

I don't know if it's an issue with the benchmark or the RAID controller, but I just don't trust those numbers.