SAS and SSD

Tags:
  • Hard Drives
  • SAS
  • NAS / RAID
  • Seagate
  • Storage
October 26, 2007 7:22:56 PM

hello,

Initially I was thinking of getting two 36GB Seagate SAS drives and putting them in RAID 0 for better performance for my OS and apps.
I was deciding between two Seagate models (the 15K.4 and the Savvio 15K),
but now I see the 15K.4 is out of production and the price of the 2.5" Savvio 36GB is running around $340.
Adding a good controller to the math ($300), it comes to about $1000!

Now my question is: if I'm aiming at almost $1000 anyway,
wouldn't I be better off with two SSDs (32GB each, $500 each) in RAID 0 as well, for better write speed?
I know they are a bit slow on writes and a bit slow on reads, but again, almost no seek time!!

I believe I read somewhere that the regular SATA controller I have on the mobo anyway will RAID them just as well.

Will I get better performance with this configuration?

And if yes, what drive should I get?

Thanks.


October 27, 2007 2:12:56 AM

Well,
your suggestion is truly amazing,
but again, not available on the market at the moment.
But off the top of your head, do you have a solution for my issue at the moment?
October 27, 2007 4:11:57 AM

My suggestion is to get a 150GB Raptor for your OS, and one for your apps. Then wait for some highly parallel flash drives like the Fusion-io to come to market. The cost of an 80GB drive is estimated at about $2400. I see no reason why that can't come down, since flash memory is getting more affordable.

I once tried using some 15K SCSI drives, but found that their performance was about the same. I was disappointed. Same thing for RAID-0 using the mobo RAID. The reality is that RAID-0 shows up nicely in synthetic benchmarks, but may actually perform worse unless you have the perfect application. You might get better results with a good add-in SAS RAID controller with lots of memory. Look at some of the benchmarks on storagereview.com. The reason, I think, is that the 15K drives are optimized for the server environment with lots of quick random reads, whereas the single-user desktop environment is characterized by queue lengths under 2 and longer, predictable sequential reads.


October 27, 2007 4:41:09 AM

me: ponders what the hell kind of data outside of a data center would get someone to spend $1000 to take a few ticks off the clock.
October 27, 2007 6:34:13 AM

You have too much money; I like it.
Try this, it will blow away anything you're looking at: the HyperDrive4 (Revision 3)
Quote:
It fires up Windows XP in 2 seconds from the splash screen to the desktop with nForce4/5 Mobos. It fires up Windows 2003 Server in 2 seconds (although in both cases it is hardware polling and device driver timing loops that take up most of that 2 seconds). So it is "instant on" and gives you an "instant desktop".
Does it get any faster? C'mon get your wallet out.
October 27, 2007 7:10:14 AM

From the benches on storagereview.com, Raptors perform better than 15K drives when dealing with normal desktop tasks. The Raptors are targeted towards a desktop system (in firmware) and the 15K drives are targeted towards server usage patterns.

-mcg
October 7, 2009 2:12:43 PM

cbxbiker61 said:
me: ponders what the hell kind of data outside of a data center would get someone to spend $1000 to take a few ticks off the clock.


15K SAS disks go for about $20-$40 apiece new on eBay!!!
October 7, 2009 2:36:41 PM

Don't go 15K SAS; go SSD instead. SAS disks running at 15,000 RPM will still only be able to handle up to 200 random IOps, while SSDs can handle tens of thousands of random IOps. So a HDD can never beat an SSD in this regard, and the choice is between a SLOW HDD and a FAST SSD. Not even RAID0 can help it get close to SSD performance levels.

And surely two Intel X25-M 80GB G2 disks running in RAID0 would be a lot cheaper and of course extremely fast, especially with Intel onboard RAID drivers with the Write Caching option enabled.
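
As a rough illustration of that IOps gap, here's a back-of-the-envelope sketch in Python; the service times are assumed, illustrative values, not measurements of any particular drive:

    # Why a 15,000 RPM disk tops out around 200 random IOps while an SSD does not.
    # All figures below are assumed, illustrative values.
    avg_seek_ms      = 3.5                     # assumed average seek time for a 15K drive
    half_rotation_ms = 60_000 / 15_000 / 2     # 2.0 ms rotational latency at 15,000 RPM
    hdd_service_ms   = avg_seek_ms + half_rotation_ms

    ssd_service_ms   = 0.1                     # assumed flash read latency, no mechanical seek

    hdd_iops = 1000 / hdd_service_ms           # ~180 IOps, in line with "up to 200"
    ssd_iops = 1000 / ssd_service_ms           # ~10,000 IOps

    print(f"15K HDD: ~{hdd_iops:.0f} random IOps")
    print(f"SSD    : ~{ssd_iops:.0f} random IOps")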
October 7, 2009 2:47:36 PM

sub mesa said:
Don't go 15K SAS; go SSD instead. SAS disks running at 15,000 RPM will still only be able to handle up to 200 random IOps, while SSDs can handle tens of thousands of random IOps. So a HDD can never beat an SSD in this regard, and the choice is between a SLOW HDD and a FAST SSD. Not even RAID0 can help it get close to SSD performance levels.

And surely two Intel X25-M 80GB G2 disks running in RAID0 would be a lot cheaper and of course extremely fast, especially with Intel onboard RAID drivers with the Write Caching option enabled.


Your X25-M goes for $230 last I knew; that wasn't cheap for 80GB, and now you're talking two? For $460 I could buy 18 SAS disks, have more capacity, and perform just as well.
October 7, 2009 2:54:18 PM

You said your budget was $1000. For that price you can either have awesome performance, or awesome capacity. You can't have both.

SAS = slow. Any HDD = slow, if you consider random I/O performance. Ah, you won't believe me anyway; take a look at this graph:



source: http://www.anandtech.com/storage/showdoc.aspx?i=3607&p=...

The only HDD in this benchmark, the WD VelociRaptor, is performing at a whopping 0.68MB/s. Say a SAS disk can do 1MB/s at most; 10 of them would reach 5MB/s using RAID0 (50% efficiency). So even 10 of the fastest hard drives cannot come close to one good SSD in this kind of I/O.

Of course, if you look at sequential I/O, the hard drives are not that bad. But sequential I/O is hardly ever a true bottleneck; I would consider IOps performance much more important.
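
To put rough numbers on that scaling argument (all figures are the assumptions from this post, not benchmark results):

    # Even generous random-I/O figures for a hard drive don't add up to one SSD.
    hdd_random_mb_s  = 1.0   # "say a SAS disk can do 1MB/s at most" in this workload
    raid0_efficiency = 0.5   # assumed 50% scaling efficiency, as above
    drives           = 10

    array_mb_s = hdd_random_mb_s * drives * raid0_efficiency   # = 5 MB/s
    ssd_mb_s   = 30.0   # assumed random-I/O figure for a good SSD, for contrast

    print(f"10-drive RAID0: ~{array_mb_s:.1f} MB/s random I/O")
    print(f"single SSD    : ~{ssd_mb_s:.1f} MB/s random I/O")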
October 7, 2009 3:20:25 PM



This shows a Cheetah 300 3.5" as having 2MB/s, and that's a slower drive than a Savvio 15K.2 36GB 2.5"... so let's say my Savvio gets 3MB/s; now multiply that by 40, because that's how many you can buy for a grand... 120MB/s... 1.4TB... vs what, 2 Intels at 120MB/s, 320GB, and over budget by a couple hundred bucks??? What do you think??????
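
Spelling out the arithmetic being argued here (prices and per-drive MB/s are the poster's assumptions from this thread, not verified figures):

    # Cost-per-throughput comparison under the assumed figures above.
    savvio_price, savvio_mb_s = 25, 3.0    # assumed ~$20-40 per drive, ~3 MB/s each
    x25_price,    x25_mb_s    = 600, 60.0  # assumed X25-M 160GB figures from the thread
    budget = 1000

    savvio_count = budget // savvio_price            # 40 drives
    savvio_total = savvio_count * savvio_mb_s        # 120 MB/s (ignores RAID overhead)
    savvio_tb    = savvio_count * 36 / 1000          # ~1.4 TB

    x25_count = 2
    x25_total = x25_count * x25_mb_s                 # 120 MB/s
    x25_gb    = x25_count * 160                      # 320 GB
    x25_over  = x25_count * x25_price - budget       # ~$200 over budget

    print(savvio_count, savvio_total, savvio_tb)
    print(x25_count, x25_total, x25_gb, x25_over)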
October 7, 2009 4:01:17 PM

Quote:
so let's say my Savvio gets 3MB/s; now multiply that by 40, because that's how many you can buy for a grand

You can buy 40 hard drives for $1000? How are you going to connect all 40? And 40 spinning disks * 30W = 1200W of peak draw. Does your budget include a 2000W power supply and two or three high-end 16-port controllers? If not, how are you going to connect 40 disks within the limits of your budget?

Also, in your screenshot the X25-E is listed, while I'm talking about the X25-M G2, which, while cheaper, is also faster in some respects than its SLC counterpart. Lastly, it's 8KB random read tests now, not 4KB random read or even 512-byte random read (the ultimate IOps benchmark). Though all are synthetic, they do show the fundamental weakness of hard disk drives versus solid state storage.

HDDs are no match for a good SSD - not even 40 of them. And that's not even counting the benefits in noise, vibration, power consumption, reliability, size and environmental impact.

That said, even 40 of them won't scale to a point where they can beat an SSD. An SSD is too fast for that.

If one disk ha
October 7, 2009 4:02:30 PM

Never mind the last sentence. :) 
Too bad I still can't edit posts on this forum...
October 7, 2009 4:23:41 PM

OK so, I did use your X25-M G2 for the calculation; they run about 600 bucks and get 58.5 on a 4KB. An X25-E gets 48 on a 4KB and 55 on an 8KB, so when I calculated for the X25 I gave you the benefit of the doubt with 58.5 on 8KB.

Savvios only draw 7-8W at full load.



That's only a 300-watt PSU, which is smaller than most PCs'...

Most HP SAS controllers sell secondhand for a hundred bucks or so; that should make up for being over budget on the X25s.

Other factors were not considered here, just cost to performance. Your comment that "SSD is too fast for that" is a little ignorant. Drive for drive, SSD is faster, but in cost per performance and per gig, 15K 2.5" SAS disks are better; that's why most enterprise-class servers use 15K SAS in some level of RAID...
October 7, 2009 4:27:57 PM

I also rounded up to 60 when I multiplied for RAID, and the total cost of the X25s is $1200, two hundred over. So adding my HP ProLiant controllers at about 100 bucks apiece, you're looking at the same cost for the same performance, not to mention the sequential read and write advantages of the disks, with more storage...
October 7, 2009 4:30:34 PM

sub mesa said:
Quote:
so let's say my Savvio gets 3MB/s; now multiply that by 40, because that's how many you can buy for a grand

You can buy 40 hard drives for $1000? How are you going to connect all 40? And 40 spinning disks * 30W = 1200W of peak draw. Does your budget include a 2000W power supply and two or three high-end 16-port controllers? If not, how are you going to connect 40 disks within the limits of your budget?

Also, in your screenshot the X25-E is listed, while I'm talking about the X25-M G2, which, while cheaper, is also faster in some respects than its SLC counterpart. Lastly, it's 8KB random read tests now, not 4KB random read or even 512-byte random read (the ultimate IOps benchmark). Though all are synthetic, they do show the fundamental weakness of hard disk drives versus solid state storage.

If one disk ha


Please do some research and calculations before you reply; you wouldn't have had to ask these questions if you had.
October 7, 2009 5:52:53 PM

Look, I'm here to provide help to users, since I come from a world of storage and can often give people advice that will get them more performance for the same amount of invested money. Money is important, and you should compare products based on that. You don't have to take my advice, though I do believe you will hear the same story from other knowledgeable posters around here and elsewhere.

Your graph displaying power consumption at max throughput is NOT the maximum power the disk will use. That peak occurs right when power is switched on and the drive has to spin up. Most 3.5" disks use around 28-35W for spinning up; 2.5" disks use significantly less. While this only lasts 4 or 5 seconds, this "peak" power consumption becomes a problem with many hard drives in a system. With ten 3.5" disks you already need 300-350W for the disks alone; with 40 disks you need 40 * 30 = 1200W. This excludes the power consumption of the rest of the system: motherboard, CPU, graphics card, etc.

To remedy this, a feature known as staggered spin-up was invented, which lets multiple hard drives spin up with a delay instead of all at once. Normal SATA controllers do not support it, but you can find it on most "server-class" PCI Express RAID controllers, like Areca. Using staggered spin-up lets you use a less powerful power supply, which reduces cost and also improves efficiency, since power supplies quickly become less efficient below 20% load. It's a shame if you need just 200W, peaking to 300W, while you need a 2000W power supply just to spin up the drives.

If you do use a power supply that's not powerful enough, the system will power down only seconds after you press the power button, thanks to a safety mechanism inside the power supply called overload protection.
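
A quick sketch of that power-budget argument, using the wattages assumed in this post (not datasheet values):

    # Peak draw for 40 disks: all spinning up at once vs staggered spin-up.
    drives             = 40
    spinup_w_per_disk  = 30   # ~28-35W while a 3.5" disk spins up (assumed)
    running_w_per_disk = 8    # assumed rough steady-state draw per drive

    all_at_once = drives * spinup_w_per_disk                    # 1200W for the disks alone

    # With staggered spin-up only one disk spins up at a time while the others run.
    staggered = (drives - 1) * running_w_per_disk + spinup_w_per_disk   # ~342W peak

    print(f"simultaneous spin-up: ~{all_at_once}W peak")
    print(f"staggered spin-up   : ~{staggered}W peak")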

About the 15K SAS disks being better for server storage, you're totally wrong on that. While SSDs haven't matured as far and as diversely as I'd like, their benefits are exactly what the server storage market demands:

1) Reliability (no redundancy using RAID1+ is strictly required, though it's always handy to be able to change hardware without downtime)
2) Performance (one disk can replace whole arrays in terms of IOps performance)
3) Power consumption (basically non-existent; no heat problems at all, and it reduces cost)
4) Lower TCO (Total Cost of Ownership), due to not requiring a RAID controller and only needing a few SSDs. Also, it's much less likely you need to replace one of these drives within its service life.

So while capacity is limited, that's typically not the most important thing in server-based storage. Sure, some databases can be quite large, but in those cases a large part of the database is often "passive" and not accessed as frequently as other parts. If the "active dataset" exceeds the limits of RAM caching, SSDs can act as an additional cache layer for a very large database that ultimately resides on HDDs. The ZFS filesystem is known to be able to use SSDs as cache devices, as I've described here.

About the Intel SSDs: I would generally get the 80GB version. You could pair four of them using RAID0 if you like; there's no real risk of a drive failing the way HDDs frequently fail, unless there is some outside influence. The RAID itself, though, does add an additional point of failure, something which can fail on its own without the disks being faulty. You should be able to buy four Intel X25-M 80GB drives within the $1000 budget, which would get you 320GB of total capacity in RAID0.
October 7, 2009 5:56:33 PM

sub mesa said:
...or even 512-byte random read (the ultimate IOps benchmark). Though all are synthetic, they do show the fundamental weakness of hard disk drives versus solid state storage.

By synthetic you mean it doesn't apply to real-life situations? Who in their right mind is going to pull so many random 512-byte pieces of information that they overload their HDD? I don't think I even have any files on my computer that are that small...

Besides the actual folders themselves, the only things on here that small are log files, super-small text files, crap like that...

I'm by no means saying solid state is not a better technology; I'm simply saying SSDs are not the end-all of the performance-to-dollar-to-size debate. I easily proved here that 15K disks can compete with/beat SSDs when cost is figured in. Granted, one SSD vs one 15K disk will win all day long, but good SSDs cost about 400-600 bucks and 15Ks only 20-40. A fair comparison comes down to: my budget is X, and I can buy Y SSDs for X or Z 15Ks for X; most of the time the Z 15Ks will perform the same, because Z is so much larger than Y. And you will always get more storage with 15Ks for the same price, therefore making 15Ks the better choice, with performance on par and significantly more storage for the same price.
October 7, 2009 6:02:56 PM

I do agree that 4x 80GB would be faster; I assumed you were talking about the 160GB, as that is what you had posted benchies on.

As far as the power goes, we are talking server-class HP ProLiant SAS cards, so yes, they delay the disk spin-up.
October 7, 2009 6:10:09 PM

MTBF on your X25-M is the same as the 15Ks', and SSD performance degrades over time where the 15Ks' doesn't.
October 7, 2009 6:31:03 PM

There is no way any RAID of hard drives can beat four Intel X25-Ms in RAID0 using Intel ICHxR RAID drivers with 'Write Caching' enabled. Sure, you can get higher MB/s when writing sequentially, for example, but in no realistic circumstance of common usage will an array of HDDs be faster - within the budget limits you provided.

So if we compare SSD performance for $1000 versus HDD performance for $1000, I'd say SSD wins hands down in IOps performance.

If you compare capacity per dollar, SSDs will always lose, of course. But that was not what this topic was about; it was about getting the highest performance out of a $1000 budget, and the question of whether an SSD or a SAS HDD was better for this task. My answer is: an SSD is always better if you look at random I/O performance, which is what server-based storage needs very badly. Desktop users also benefit from a faster-responding system, longer service life, and less noise, power consumption and vibration.

About random I/O, synthetic I/O and benchmark profiles: this falls outside the topic of this discussion. I will say this: synthetic benchmarking is useful to analyse the limits of the storage device. Since 512 bytes represents one sector, it's the smallest amount of data that can be read. In fact, only multiples of this number can be read from or written to a block-level storage device, such as a HDD or SSD. When performing a 100% random read or write benchmark with a 512-byte blocksize, you get to know the limits of the drive. It's basically a seeking-performance test.

While both modern SSDs and HDDs are fast in sequential access, the real cornerstone of modern storage is being fast in the non-sequential I/O of many realistic applications and filesystems. Because this access pattern is not predictable, hard drive controllers cannot read it in advance and will have to seek for every I/O that is not contiguous. Because of this unpredictability, this type of I/O is commonly referred to as "random I/O". With benchmark applications you can often set how random the I/O is, from 0% (100% sequential) to 100% (no two I/Os are contiguous). A lot about I/O benchmarking can be read elsewhere, though. But my point was that synthetic numbers are not meaningless. Random read IOps is a very important number that's used a lot in the server storage industry. They don't talk MB/s there - that's for consumers who don't know any better.
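
To illustrate why a 512-byte, queue-depth-1 random read test behaves like a pure seek test, here's a small sketch (service times are assumed, illustrative values):

    # Throughput at 512B QD1 is dominated by per-request latency, not bandwidth.
    sector_bytes   = 512
    hdd_service_ms = 5.5   # assumed seek + rotational latency for a 15K drive
    ssd_service_ms = 0.1   # assumed flash read latency, no seek

    for name, ms in [("15K HDD", hdd_service_ms), ("SSD", ssd_service_ms)]:
        iops = 1000 / ms                        # one outstanding request at a time
        mb_s = iops * sector_bytes / 1_000_000  # resulting throughput at 512B
        print(f"{name}: ~{iops:.0f} IOps, ~{mb_s:.2f} MB/s at 512B QD1")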
October 7, 2009 6:48:53 PM

sub mesa said:
There is no way any RAID of hard drives can beat four Intel X25-Ms in RAID0 using Intel ICHxR RAID drivers with 'Write Caching' enabled. Sure, you can get higher MB/s when writing sequentially, for example, but in no realistic circumstance of common usage will an array of HDDs be faster - within the budget limits you provided.

So if we compare SSD performance for $1000 versus HDD performance for $1000, I'd say SSD wins hands down in IOps performance.



You obviously did not read a thing I said...

I proved X25-M SSDs don't have a longer life,
I proved that an array of 15Ks can perform on par and have more storage at the same time,
I proved that power is not an issue with large disk arrays,
and you're still hung up on IOps even though my math shows that 15Ks can match that...
Just because hard disks are old and SSDs are new doesn't mean hard disks can't perform the same when you spend the same amount of money on both.
You're still hung up on the fact that ONE SSD is faster than ONE 15K; I already said that's true! But 1 SSD, or even 4, is not/not much faster than 40 15Ks.
40 15Ks cost the SAME or LESS than 4 SSDs.

And what about the people who don't have $240-290 to drop on their desktop PC's HDD?? What then?
October 7, 2009 7:02:57 PM

You have not shown any data to back up your claim or to disprove mine. When you get the data, post it.

Until then...
October 7, 2009 7:10:25 PM

Dude, I'm not going to argue with you about this. If you don't want my help or advice, that's your choice. I don't need to prove anything; I do that in business, not on these forums.

I would like to see the end result of your little project, though, and some pics of that monstrous beast you put together. Oh, and some video + sound would be nice, so we can hear how much noise it produces too.

Now, if someone would donate me $1000, I would send you back an SSD system that outperforms your 40-disk array. Unlike your system, it would sport a sexy little Mini-ITX case with just a 200W power supply. Wouldn't that be much nicer?
October 7, 2009 7:14:31 PM

$20-$40?

Nope. Try $200+ for a good new 15k.
October 7, 2009 7:24:26 PM

I'm going to side with getting the SSDs for three very important reasons (which I think have not been addressed, from what I can see):

1) Physical size: 4 SSDs fit easily into the smallest of computer cases, while 40 SAS drives require a much more massive computer case or rackmount, possibly with additional cooling to keep the drives cool.

2) Power consumption: good SSDs should use no more than about 1-2W per drive during operation, while SAS drives use up to, say, 7W, so that comes to 40 x 7 = 280W. If the system runs all the time, that can add up to about $270 a year in electric bills just to run 40 drives continuously at 7W, 365 days a year, at 11 cents per kWh (see the worked numbers after this post).

3) RAID problems: with a 40-SAS-drive array you are asking for major trouble. How long will those 40 drives last until one or more of them fail? Plus RMA/shipping costs in getting the 40 drives; I would imagine at least 2-3 out of 40 would arrive dead at your door right off the bat.

Additionally, did the 40-drive cost include the cost of 2-3 rather expensive RAID cards? (I think I missed that somewhere.)
Or was this topic just a strict comparison of the performance of $1000 worth of the drives themselves (assuming all the other hardware was the same)?
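
Working out the electricity estimate from point 2 above (the rate and wattages are the figures assumed in that point):

    # Yearly cost of running 40 drives continuously at ~7W each.
    drives, watts_per_drive = 40, 7
    rate_per_kwh   = 0.11          # 11 cents per kWh
    hours_per_year = 24 * 365

    kwh_per_year  = drives * watts_per_drive * hours_per_year / 1000   # ~2453 kWh
    cost_per_year = kwh_per_year * rate_per_kwh                        # ~$270

    print(f"{kwh_per_year:.0f} kWh/year -> ${cost_per_year:.0f}/year for the drives alone")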
October 7, 2009 7:29:02 PM

cjl said:
$20-$40?

Nope. Try $200+ for a good new 15k.



Wrong!

Check eBay: you can buy a whole box of new Seagate Savvio 15K.2 36GB SAS disks for 20 bucks apiece, as many 1U servers ship with these and they get replaced with larger 146GB or 300GB disks before they are put into production. Do some shopping and take a couple hundred off your next build!
October 7, 2009 7:29:32 PM

kerdika said:
I proved that an array of 15Ks can perform on par and have more storage at the same time
Kerdika, if you live in a universe where sequential I/O is the most important thing to deal with, then that's just fine. There ARE some applications where it's very important. But for the rest of us, random I/Os are a lot more important for making the system quick and responsive.

I've no quibble with your claims as long as you're clear about the circumstances in which they're relevant. But to claim that "your way" is the best way for everyone, or even for most people, is IMHO to either misrepresent the situation or to fail to understand it.
October 7, 2009 7:35:40 PM

paperfox said:
I'm going to side with getting the SSDs for three very important reasons (which I think have not been addressed, from what I can see):

1) Physical size: 4 SSDs fit easily into the smallest of computer cases, while 40 SAS drives require a much more massive computer case or rackmount, possibly with additional cooling to keep the drives cool.

2) Power consumption: good SSDs should use no more than about 1-2W per drive during operation, while SAS drives use up to, say, 7W, so that comes to 40 x 7 = 280W. If the system runs all the time, that can add up to about $270 a year in electric bills just to run 40 drives continuously at 7W, 365 days a year, at 11 cents per kWh.

3) RAID problems: with a 40-SAS-drive array you are asking for major trouble. How long will those 40 drives last until one or more of them fail? Plus RMA/shipping costs in getting the 40 drives; I would imagine at least 2-3 out of 40 would arrive dead at your door right off the bat.

Additionally, did the 40-drive cost include the cost of 2-3 rather expensive RAID cards? (I think I missed that somewhere.)
Or was this topic just a strict comparison of the performance of $1000 worth of the drives themselves (assuming all the other hardware was the same)?


You make very good points. This was never intended to say disks are better than SSDs, and I would probably go with an SSD, but I just wanted to point out that 15K SAS disks can perform on par with SSDs when you figure cost. I did include RAID cards; HP ProLiant cards with loads of cache are all over eBay secondhand. These are enterprise class and have probably been replaced due to an implementation of a SAN. They typically go for 100-200 bucks. Drive replacement for failed drives is a lot cheaper than SSD, because they only cost 20 bucks, where an SSD will cost you 280 (at least the X25-M that we discussed). Also, SSDs are not bulletproof!!! The MTBF is the same for the X25-M as for the Seagate Savvio 15K.2. Savvios are also 2.5" drives.

October 7, 2009 9:50:58 PM

sminlal said:
Kerdika, if you live in a universe where sequential I/O is the most important thing to deal with, then that's just fine. There ARE some applications where it's very important. But for the rest of us, random I/Os are a lot more important for making the system quick and responsive.

I've no quibble with your claims as long as you're clear about the circumstances in which they're relevant. But to claim that "your way" is the best way for everyone, or even for most people, is IMHO to either misrepresent the situation or to fail to understand it.


But even in random I/O the SAS controller knows where the data is stored and can queue the request to the HDD that holds it. When the next piece of random data (not sequential, but still on the array) comes down the line, the controller will queue it to the corresponding HDD (not necessarily the same HDD). Because it's random, the data could be on one disk or on many because of the RAID; when it's on many, the data can be pulled all at once.


I also would like to say that I'm not trying to force this on anyone; I just wanted to propose an alternative and disprove that "15Ks are always 100% of the time slower than SSDs," because that's simply not true. In most cases a RAID array can be built for the same cost that will perform slightly slower or on par with the SSD and have more space. 10Ks and 15Ks are the happy medium, if you will: able to perform but also able to store. And when it comes down to it, you're not going to push a 40-disk array or even a 4-SSD array to its full potential in your home PC. No one should be spending $1000 on a disk for the home PC; this just slows price drops in new tech. But for the sake of knowledge I wanted to show the data that it was possible to make disks perform with an SSD.
October 7, 2009 10:00:12 PM

Kerdika: that's fine, but the drive or controller won't get that information until it gets back the result from the last I/O operation. So this is a serial operation, with a queue depth of just 1.

When writing, this is less of a problem, as writes can be buffered - not written straight away but kept in volatile memory to allow parallel I/O. But reading is a different matter. In a 100% random read 512-byte benchmark with a queue depth of 1, as often used in industry, a 40-disk RAID0 array won't be any better than a single drive, as it's latency you're fighting here. The array can only work on one tiny request at a time while having no information about what the next requests may be. While in reality some access is sequential, at least partly, this is still a major weakness of HDDs, and you can't solve it by putting a lot of them in RAID0.

Sure you'll get high MB/s, but in a realistic boot test, with a used installation, a bunch of disks in RAID0 still won't boot very fast - the disks are idling 99% of the time even while the storage is the bottleneck. You can only solve this with an SSD: by dropping mechanics overboard and relying only on electronics, the whole thing gets better.
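
Here's a tiny model of that point: at queue depth 1, each random read must complete before the next can be issued, so the chain of reads doesn't get shorter no matter how many disks are in the RAID0 (latencies below are assumed, illustrative values):

    # Dependent (QD1) reads are served strictly one after another.
    def time_for_dependent_reads(n_reads, per_read_ms, n_disks):
        # n_disks is deliberately unused: with one outstanding request at a
        # time, extra spindles cannot overlap the work.
        return n_reads * per_read_ms

    hdd_ms, ssd_ms = 5.5, 0.1        # assumed per-read service times
    reads = 10_000                   # e.g. small dependent reads during boot

    print("1 HDD   :", time_for_dependent_reads(reads, hdd_ms, 1) / 1000, "s")
    print("40 HDDs :", time_for_dependent_reads(reads, hdd_ms, 40) / 1000, "s")  # same
    print("1 SSD   :", time_for_dependent_reads(reads, ssd_ms, 1) / 1000, "s")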
October 7, 2009 10:29:06 PM

sub mesa said:
Kerdika: that's fine, but the drive or controller won't get that information until it gets back the result from the last I/O operation. So this is a serial operation, with a queue depth of just 1.

When writing, this is less of a problem, as writes can be buffered - not written straight away but kept in volatile memory to allow parallel I/O. But reading is a different matter. In a 100% random read 512-byte benchmark with a queue depth of 1, as often used in industry, a 40-disk RAID0 array won't be any better than a single drive, as it's latency you're fighting here. The array can only work on one tiny request at a time while having no information about what the next requests may be. While in reality some access is sequential, at least partly, this is still a major weakness of HDDs, and you can't solve it by putting a lot of them in RAID0.

Sure you'll get high MB/s, but in a realistic boot test, with a used installation, a bunch of disks in RAID0 still won't boot very fast - the disks are idling 99% of the time even while the storage is the bottleneck. You can only solve this with an SSD: by dropping mechanics overboard and relying only on electronics, the whole thing gets better.



I see what you're saying. This is true if any two pieces of data reside on the same disk, not if they are on multiple disks. If I have to pull twice from the same disk in the array, then yeah, I'm going to have to wait 2ms for the disk to find the first piece of data, then 2 more ms to find the second; but if one piece is on one disk and the next is on another disk, then I only have to wait 2ms for both disks to grab the data.
October 7, 2009 11:05:21 PM

No, you do not understand. Please re-read what I've said. The issue is that the drive doesn't know what the next I/O request will be. The application itself may not know yet, as it might depend on the data that's currently being processed; only when that data is known to the application will the next I/O request come into play.

As many applications use blocking I/O, this issue affects a lot of desktop software. So your 40-disk array still won't look good here.
October 8, 2009 5:39:35 AM

kerdika said:
but if one piece is on one disk and the next is on another disk, then I only have to wait 2ms for both disks to grab the data.
That's only true if the software asks for both pieces of data at the same time. If you have a program which first reads data item one (and it's found on the first disk), and when the software gets that data back it then reads data item two (which is found on the second disk), the two reads do not happen at the same time.

This is the difference in terms of concurrency that I've been trying to drive home.

When you boot a system, you do get some benefit from concurrency, but not nearly enough to offset the vastly faster access time of an SSD.

So while you can build a cheap RAID array that meets or exceeds an SSD in some aspects of performance, you cannot reduce the basic access time. And this is where SSDs really excel, and where a lot (probably most) of the benefit comes from for a typical desktop user.
October 8, 2009 2:58:33 PM

sminlal said:
That's only true if the software asks for both pieces of data at the same time. If you have a program which first reads data item one (and it's found on the first disk), and when the software gets that data back it then reads data item two (which is found on the second disk), the two reads do not happen at the same time.


I see what you're saying.

(I have another question for you; I'm going to post it in the other thread.)
April 15, 2010 2:39:00 PM

The quickest way to end this convo would have been:

SAS latency: 2ms
Intel X25-M SSD latency: 0.65ms read, 0.85ms write


Then of course:
the 4K random performance between the two,
the power consumption between the two,
and (if you can find it)
the RAID 0 performance between the two.

When the SSDs are far and away the faster solution, the convo would end. Plus with SAS drives you have to spend 600-700 bucks on a high-end SAS controller (or two if you want 40 [really? 40? just wtf, man? It's not a data center, it's a guy's home computer] drives in it) to get the kind of performance he's talking about, and even then it won't come close to the random read/write performance of the SSD. Why? Electrons traveling through a wire move much faster than a mechanical arm on a 15,000 RPM disk can access data. Longevity and capacity are the only places where SAS/SATA disks win over SSDs, and that's because the technology for those has been in development for 60 years while SSDs have only been around for about 10.
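
Using just the read latencies quoted at the top of this post, a quick single-queue comparison (note the 2ms SAS figure is rotational latency only; a real random read adds seek time on top, so this understates the gap):

    # Reads issued one at a time, using the latencies quoted above.
    sas_read_ms = 2.0
    ssd_read_ms = 0.65     # Intel X25-M read latency from this post

    sas_iops = 1000 / sas_read_ms    # ~500 reads/second
    ssd_iops = 1000 / ssd_read_ms    # ~1538 reads/second

    print(f"SAS: ~{sas_iops:.0f} IOps  |  X25-M: ~{ssd_iops:.0f} IOps at queue depth 1")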
April 15, 2010 3:46:55 PM

I gave up on this a long time ago, but you're wrong on the controllers. You can get HP enterprise-class RAID controllers with battery-backed write cache and RAID 0-6 that can handle 30+ drives for around a hundred bucks on eBay. You would have to be an idiot to buy one for more than $200.

This was just to show that similar performance could be achieved for the same setup cost.