Solved

SSD vs SAS: Beating a Dead Horse...

October 6, 2009 6:01:54 PM

OK, so I'm probably beating a dead horse here, but...

SSD or SAS?

Not just performance; I'm talking everything: random reads/writes, sequential reads/writes, COST, scalability, longevity.

Here is why I bring this up:

I have 2x Seagate Savvio 2.5" 15k 36GB SAS drives in RAID 0. I paid $20 apiece for the HDDs, and my board has onboard SAS RAID. I would like to know how this compares to an SSD setup in all the categories listed above.
October 6, 2009 6:17:14 PM

SAS vs. SATA is not a matter of speed or latency (or anything performance-related); they're just two different protocols serving different purposes.

Quote:
I have 2x Seagate Savvio 2.5" 15k 36GB SAS drives in RAID 0. I paid $20 apiece for the HDDs, and my board has onboard SAS RAID. I would like to know how this compares to an SSD setup in all the categories listed above.

You probably won't notice a major speed difference unless you attempt to run two or more disk-heavy activities at the same time, at which point the performance of an SSD flies way past ANY HDD. That's the main advantage of an SSD.
If an SSD is treated with idealistic/traditional HDD usage patterns, then you won't see much benefit over 15k rpm drives.

If utilised in a server (mainly database work), then it's a whole different story.

October 6, 2009 6:46:41 PM

SAS does make a difference over SATA, although that was not what I had asked.
SAS controllers are intelligent, producing more throughput and offloading work from the CPU/northbridge. And they do not serve different purposes; they are both technologies for connecting storage devices, just different ways of doing it.

Also remember that I asked about price. My two drives cost half the cost of a single 36GB SSD, so I could effectively have 4x 15k SAS drives in RAID 0 vs. one 36GB SSD for the same price, allowing me to pull 4 pieces of data at once while you can only do one. If I'm not right about this then correct me, but through SAS multipath I/O and tagged command queuing I'm able to "multitask" my disks.

Define idealistic/traditional... my system is used for heavy multitasking and gaming.
October 6, 2009 6:49:33 PM

SAS can have higher IOPS performance because it has a deeper queue depth; SATA does support NCQ (mandatory in the SATA 3Gbps spec), but not as deep a queue as SAS supports.

Other than that, an interface is simply an interface; the cable through which the data goes. It doesn't tell anything about the actual storage device. In theory you could have a wonderfully fast floppy drive attached to that SAS connector. :) 

Sadly, 15,000 rpm SAS drives actually perform quite badly for desktop applications. Their firmware is totally focused on IOPS at high queue depths: when 10 or even 1,000 I/O requests are waiting in the queue, they can be sent in groups and processed by the SAS drive in the most efficient way.

Because of this, a 10,000 rpm VelociRaptor often performs better than 15,000 rpm SAS for desktop systems. Either way, SAS and SATA HDDs are both inferior in performance terms when you consider an SSD. Comparing an SSD to an HDD in terms of random I/O performance is like comparing a snail to an aeroplane. An aeroplane without wings, I might add. :)
October 6, 2009 7:01:51 PM

sub mesa said:
SAS can have higher IOPS performance because it has a deeper queue depth; SATA does support NCQ (mandatory in the SATA 3Gbps spec), but not as deep a queue as SAS supports.

Other than that, an interface is simply an interface; the cable through which the data goes. It doesn't tell anything about the actual storage device. In theory you could have a wonderfully fast floppy drive attached to that SAS connector. :) 


Right! That's what I was getting at with this:
"SAS does make a difference over SATA although that was not what i had asked."

sub mesa said:
Their firmware is totally focused on IOPS at high queue depths: when 10 or even 1,000 I/O requests are waiting in the queue, they can be sent in groups and processed by the SAS drive in the most efficient way.


Again, right on the head. With the addition of multiple drives the load is now divided, scaling performance with every disk. Comparing disk costs, you can get four 15k SAS drives for the price of one SSD. I want to know how performance compares between four 15k SAS drives and one SSD, since the price is the same. And I'm going by the cheaper "mainstream" ($100) SSDs, not the high-performance disks ($500+); otherwise we would be talking 20 SAS vs. 1 SSD :p
October 6, 2009 9:15:14 PM

kerdika said:
Define idealistic/traditional... my system is used for heavy multitasking and gaming.


Letting one disk activity complete before doing another task: that's what's been set into the minds of most PC users thanks to HDDs. So multitasking should in most cases benefit from an SSD if there are concurrent disk activities going on. Most PC users would not dare to load, say, Photoshop, a WinRAR compression, and something else all at the same time with an HDD (or a few 15k rpm drives in striped RAID, for that matter).
It's similar to servers requiring high disk IOPS, but scaled down for PC usage. (SSDs utterly dominate in database servers.)
October 7, 2009 1:37:42 PM

wuzy said:
Letting one disk activity complete before doing another task: that's what's been set into the minds of most PC users thanks to HDDs. So multitasking should in most cases benefit from an SSD if there are concurrent disk activities going on. Most PC users would not dare to load, say, Photoshop, a WinRAR compression, and something else all at the same time with an HDD (or a few 15k rpm drives in striped RAID, for that matter).
It's similar to servers requiring high disk IOPS, but scaled down for PC usage. (SSDs utterly dominate in database servers.)


How can an SSD access multiple things at once if it can only send one thing down the serial cable? Likewise for traditional disks, but I can buy multiple 15k drives for the price of a single SSD. With multiple disks I have multiple serial cables to transfer data on and multiple disks to read data from...
October 7, 2009 7:12:17 PM

kerdika said:
with the addition of multiple drives the load is now divided, scaling performance with every disk.
What scales is transfer rate and concurrent I/Os. Having a multi-drive RAID set does not scale access time: it takes just as long for a 10-disk RAID set to respond to an I/O request as it does for a single drive. This is why SSDs are so very much better for the kind of small random I/O loads typically seen by most users on a desktop PC - they can complete a small, random I/O request in about 1/100th the time of even the fastest hard drive, whether it's an individual drive or a drive in a massive RAID array.


Repeat after me - for individual drives:

SSDs are way, way better for random I/O.
SSDs and hard drives are not that far apart in terms of sequential performance.
Hard drives are way, way better for cost per capacity.


For RAID arrays (of HDDs or SSDs):

RAID can improve sequential transfer rates.
RAID can improve the overall throughput for concurrent I/Os.
RAID cannot improve access times.
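Those rules can be put into a toy model (an editor's sketch: the throughput and access-time figures below are illustrative placeholders, not benchmarks):

```python
# RAID-0 striping multiplies sequential throughput by the member count,
# but the time to satisfy one small random request stays that of a single
# drive. Figures are illustrative only.

def raid0_sequential_mbps(per_drive_mbps, drives):
    """Sequential transfer scales roughly linearly with stripe members."""
    return per_drive_mbps * drives

def raid0_access_ms(per_drive_access_ms, drives):
    """Access time for one small random I/O does not improve with drive count."""
    return per_drive_access_ms

print(raid0_sequential_mbps(120, 4))  # 480: ideal 4-drive sequential scaling
print(raid0_access_ms(5.0, 4))        # 5.0: still one drive's access time
```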
October 7, 2009 7:25:29 PM

Access time is only 2ms on a 15k drive. I realize an SSD is faster, but with multiple drives each drive can pull one piece of the random data. Granted, it takes 10 drives 2ms to pull a piece of data, but if each drive is pulling the next piece in line, it ends up being 2ms for 10 requests, where an SSD is going to have 0.002ms or something like that per request. So the SSD has to read 10 times where 10 15k SAS disks all have to read once.
October 7, 2009 7:34:11 PM

kerdika said:
Access time is only 2ms on a 15k drive. I realize an SSD is faster, but with multiple drives each drive can pull one piece of the random data. Granted, it takes 10 drives 2ms to pull a piece of data, but if each drive is pulling the next piece in line, it ends up being 2ms for 10 requests, where an SSD is going to have 0.002ms or something like that per request. So the SSD has to read 10 times where 10 15k SAS disks all have to read once.
You're talking about concurrent I/O. If you're loading an application, and the application needs to load DLL 1, then DLL 2, then access INI file 1, then read some stuff from the registry, etc. - that is not concurrent I/O. An SSD will be able to zip through those tasks about 100x faster than any HDD-based RAID array. That's why random I/O performance is so important.
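To put rough numbers on that serialized loading pattern, here is a small sketch (the per-request latencies are assumed ballpark figures, not measurements):

```python
# When each request depends on the previous one (load DLL 1, then DLL 2,
# then the INI file...), total time is simply N * access_time, and RAID
# parallelism cannot help. Latencies below are assumed ballpark figures.

def app_load_time_ms(n_requests, access_ms):
    # Dependent requests are issued one at a time, never concurrently.
    return n_requests * access_ms

hdd_raid = app_load_time_ms(1000, 5.0)    # 15k HDD (array or not): ~5 ms/seek
ssd      = app_load_time_ms(1000, 0.085)  # X25-M-class SSD: ~0.085 ms
print(hdd_raid, ssd, hdd_raid / ssd)      # the SSD finishes ~59x sooner
```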
October 7, 2009 7:36:00 PM

kerdika said:
How can an SSD access multiple things at once if it can only send one thing down the serial cable? Likewise for traditional disks, but I can buy multiple 15k drives for the price of a single SSD. With multiple disks I have multiple serial cables to transfer data on and multiple disks to read data from...

Firstly, Serial ATA communication is bi-directional and full-duplex, meaning the operating system can send requests while at the same time receiving information. Communicating with the electronic components goes at near-wire speed (300MB/s), but it's best expressed as latency (ns).

Second, modern SSDs and all HDDs still in production have DRAM memory chips onboard, which act as quick but volatile memory to store requests and serve as internal "RAM" for the drive's own purposes. This memory is very fast, like your own RAM. Without it, both SSDs and HDDs would be very slow; they would indeed have to process things in serial order, without the ability to continue working straight after completion of an I/O. So in order to keep that actuator busy as much as possible, you need some form of buffering and queueing. That's why operating systems can send multiple requests, and hard drives lie about data being written while in fact it's still in their DRAM. They do this so they can receive the next request already, while in the meantime writing that data to the actual mechanical storage. So an HDD actually has an electronic side and a mechanical side.

Because of the nature of SSDs, a completely new or heavily revised protocol should come into play, to allow operating systems to exploit the benefits of flash storage and to work around its weaknesses. Much of the past work on optimizing I/O aimed to make access patterns more sequential and reduce seek time. While that works great for HDDs, SSDs get no benefit from this strategy, and it may even cause lower performance than without the optimizations. With a new extension coming into play, SSDs could have a very deep queue of requests to handle, and process them in the most parallel way possible. Because flash cells work independently from each other, the possibility for nearly endless parallelization is present.

Honestly, this is like the importance of fusion power to the energy crisis, but for the storage industry. Hard drives just aren't very good storage media, and flash-based storage, if affordable, has many benefits, both theoretical and practical. The race now will be over who designs new flash controllers utilizing more parallel processing potential, or perhaps even a new interface to queue I/Os more efficiently. Currently SSDs like Intel's use NCQ to queue I/O requests, while this was originally meant to reduce seek times on mechanical storage. :)
October 7, 2009 7:39:04 PM

kerdika said:
access time is only 2ms on 15k

More like 5ms.
October 7, 2009 7:44:28 PM

sminlal: I agree with all you said. But there is a minor exception: RAID 1 can be used to reduce seek times slightly.

The split I/O strategy, found in BSD's geom_mirror software RAID 1 driver, sends the same I/O request to both disks; whichever responds quicker 'wins'. The benefit here is a cut in the rotational delay: at any point in time the two disks have different head positioning and their rotations aren't evenly matched, so if you send an I/O at that moment, one of them will be faster. This way you can shave off some of the total service time of the I/O request, but it's not that much. The new 'load' algorithm works very well to increase the IOPS of RAID 1, which is quite uncommon, since most RAID 1 setups perform like a single disk.
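A quick Monte Carlo of that split-I/O idea (an editor's sketch, assuming the two members' head positions are independent and modeling only rotational delay, uniform over one 15,000 rpm revolution):

```python
import random

# Send the same read to both RAID1 members and take whichever responds
# first. Only rotational delay is modeled, as uniform over one 4 ms
# revolution (15,000 rpm); the two disks are assumed uncorrelated.

REV_MS = 4.0
random.seed(1)
N = 100_000

single = [random.uniform(0, REV_MS) for _ in range(N)]
pairs  = [min(random.uniform(0, REV_MS), random.uniform(0, REV_MS))
          for _ in range(N)]

print(round(sum(single) / N, 2))  # ~2.0 ms: one disk averages half a revolution
print(round(sum(pairs) / N, 2))   # ~1.33 ms: the faster of two averages a third
```

Under these assumptions the expected rotational delay drops from about half a revolution to about a third of one, and seek time is untouched, which fits the point that the saving is real but modest next to an SSD's access time.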
October 7, 2009 7:44:30 PM

http://www.tomshardware.co.uk/forum/page-244279_14_0.ht...

Disk for disk the SSD is faster, but dollar for dollar they are close to the same...

One X25-M 80GB costs $280 and has a random I/O rate of about 58MB/s*
One Savvio 15K.2 36GB costs $20 and has a random I/O rate of about 3MB/s*
14 Savvio 15K.2 36GB cost $280 and have a random I/O rate of about 42MB/s**

It's important to note that the 14 Savvios will have ~500GB of storage vs. the SSD with only 80GB.

*8KB transfers: http://images.anandtech.com/graphs/ssdfortheenterprise_... and http://images.anandtech.com/graphs/intelx25mg2perfprevi...
** calculated
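Spelling out the arithmetic behind those three lines (the prices and per-drive rates are the ones quoted in the post, circa 2009, not independently verified):

```python
# Same budget, two builds: one X25-M vs. as many $20 Savvios as $280 buys.

def drives_for_budget(budget_usd, unit_price_usd):
    return budget_usd // unit_price_usd

def array_random_mbps(per_drive_mbps, drives):
    # Assumes ideal linear scaling under enough concurrent I/O to keep
    # every member busy; real controllers lose some of this.
    return per_drive_mbps * drives

n = drives_for_budget(280, 20)
print(n)                          # 14 Savvios for the price of one X25-M
print(array_random_mbps(3, n))    # 42 MB/s ideal random I/O vs. the SSD's 58
print(n * 36)                     # 504 GB total vs. the SSD's 80 GB
```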

October 7, 2009 7:50:14 PM

sub mesa said:
The split I/O strategy, found in BSD geom_mirror software RAID1 driver, sends the same I/O request to both disks.
Yes, of course you are absolutely correct. But as you said the gains are essentially negligible with respect to the access time of an SSD.

Interestingly, that strategy is probably useless with SSDs, since both should respond in the same amount of time (if they're both mirrored drives in a RAID 1 set then the internal allocation of blocks ought to, in theory at least, be identical).
October 7, 2009 7:51:15 PM

I'M NOT DISPUTING THE SUPERIORITY OF SSDs, just their cost, and exploring an alternative method of similar performance.
October 7, 2009 7:52:47 PM

kerdika said:
14 Savvio 15K.2 36GB cost $280 and have a random I/O rate of about 42MB/s**
And again, that I/O rate is only obtainable when you have enough concurrent I/Os to keep all the drives busy.

Arguing about disk performance is like arguing about the economy - it totally depends on which factors YOU consider important vs. those which your opponent considers important... :??: 
October 7, 2009 8:01:41 PM

Also note that the new X25-M G2 is better at random reads than the SLC-based X25-E. Any benchmarks performed at the release of the X25-M are outdated, because those drives still had firmware issues that allowed performance degradation due to oversized mapping clusters. This got fixed in the new firmware, and the G2 adds even higher random read performance and even lower read latency than the SLC product. The SLC product has life-expectancy benefits the MLC flash products can't achieve, though. But with Intel's advanced wear leveling this shouldn't be a problem for most if not all practical uses.
October 7, 2009 8:06:17 PM

kerdika said:
I'M NOT DISPUTING THE SUPERIORITY OF SSDs, just their cost, and exploring an alternative method of similar performance.



As far as cost is concerned, go with the SSDs. SAS still requires the controller, which I know you have, but if something changes, the SSDs will work with any SATA controller out there. The technology is better, and I understand that you can take a gazillion SAS drives and balance out the performance of one SSD, but don't do that. The cost is there, but you're better off stepping up to SSDs.
October 7, 2009 9:43:12 PM

cirdecus said:
As far as cost is concerned, go with the SSDs. SAS still requires the controller, which I know you have, but if something changes, the SSDs will work with any SATA controller out there. The technology is better, and I understand that you can take a gazillion SAS drives and balance out the performance of one SSD, but don't do that. The cost is there, but you're better off stepping up to SSDs.


Here are my two big problems with the current SSDs:

They are not SAS, which is a much more efficient way of doing things.

And they are too expensive to consider. Sure, there are the cheap SSDs out there, but my RAID 0 15k drives walk all over those. The good ones (Intel and such) are just too expensive for me to justify.

I originally posted this to compare my two 15k Savvios to a "cheap" ($100) SSD.
October 7, 2009 10:30:58 PM

Good lawd, your posts give me a headache...

Quote:
They are not SAS, which is a much more efficient way of doing things.

The advantages SAS has over SATA mainly relate to SAN environments, and I'm not going to list each and every one of them. As I've said before, SAS vs. SATA (just the interface, not the drives) has nothing to do with performance.

Quote:
And they are too expensive to consider. Sure, there are the cheap SSDs out there, but my RAID 0 15k drives walk all over those. The good ones (Intel and such) are just too expensive for me to justify.

MLC/Indilinx-based SSDs are good alternatives to the X25-M and frequently approach $2.6/GB or less if you spot a good deal on a website like Slickdeals. You obviously have not seen the data on SSD performance (in either server or desktop work patterns) and blatantly spew useless BS. See the long comment I made before, which compiled data for database, webserver and single-user usage when comparing 15k HDDs to SSDs.

As said before, if you tried something like virus scanning, a WinRAR compression and loading up a large app all at the same time, even your RAID 0 15k array will crumple in performance; a single SSD will not.

FYI, as a sysadmin, part of my job is to build different storage arrays for different purposes, whether for our workstations or servers. Analyzing data critically is what I thrive at.

Lastly, learn how to spell and punctuate properly.

I'm not going to waste any more time in this thread and lose more brain cells to your lunatic arguments.
October 7, 2009 10:38:16 PM

wuzy said:
Lastly, learn how to spell and punctuate properly.

I'm not going to waste any more time in this thread and lose more brain cells to your lunatic arguments.


Wow, you are a dick.

Is that good spelling and punctuation? Did it ever cross your mind that I'm here to learn, as well as to explore avenues no one has before?

$280, or anything over $100, is ridiculous for a disk. As for multitasking, I can run WinRAR and play games/open PDFs all at once and I don't have to wait; for that matter, I peak my CPU before I can load up the disks.

MY DISK ARRAY COST ME $40!!! Go buy an SSD with the same performance for that...
October 7, 2009 11:08:21 PM

Lots of good thoughts above: here are my 2 cents:

A LOT depends on the nature of the workload you intend
to give your storage subsystem.

Because the technology is rather mature, particularly
with the widespread availability of perpendicular magnetic
recording and the largest caches e.g. 64 MB per HDD,
a multi-drive RAID effectively multiplies that cache
for RAID-0 arrays: e.g. 4 x HDDs @ 64MB cache = 256 MB cache.

Given current SSD prices, it's hard to beat rotating platters
in terms of cost per gigabyte -AND- in terms of performance
that is acceptable for SOME workloads -- but NOT ALL workloads.

If fail-safe redundancy is ABSOLUTELY NECESSARY, then of course
a RAID-0 is the WRONG WAY.

A LOT also depends on the power of your RAID controller(s):
a hardware RAID controller with a very large on-board cache,
particularly one that uses an x8 lane PCI-E 2.0 interface,
is a must for any serious storage subsystem.

We've been looking at storage technology for a long time,
e.g. ever since super minicomputer days, and it is clear
there is an ever widening performance gap between CPUs and RAM,
on the one hand, and rotating platters, on the other hand.

Rotating disk drives are getting much larger
without getting much faster, in general.

Even THE fastest SAS/6G HDDs at 15,000 rpm cannot approach
the extremely low access times of modern SSDs.


We have decided to wait, and our next purchase will
probably be a SAS/6G controller, like Intel's RS2BL080
or RS2BL040:

http://www.intel.com/Products/Server/RAID-controllers/R...

Then, as soon as SATA/6G SSDs become more widely available,
we expect that the prices of SATA/3G SSDs will fall,
giving us more options to choose from, and also
it's more likely that SSD technology will have matured
even more by then.

For example, Intel's Matrix Storage Technology
ICH10R does not (yet) support the TRIM function,
to my knowledge. (Please someone correct me on
this point, if I am in error.)


So, a safe approach, for now, is to invest in a powerful PCI-E
SAS/6G RAID controller and wire it to fast 15,000 rpm
SAS/6G HDDs like Seagate's Savvio 15K.2:

http://www.seagate.com/www/en-us/products/servers/savvi...

SATA/6G SSDs should be plug-compatible, by the
time they do become more widely available.
SATA/3G SSDs are already pushing the 300MB/sec. limit
of the SATA-II interface, so it's reasonable to expect
SATA/6G SSDs to exceed 300MB/second.


We also just installed 2 of these Enhance Tech X14's,
in anticipation of migrating to the 2.5" form factor:

http://www.newegg.com/Product/Product.aspx?Item=N82E168...


The X14 is much better built than Athena's comparable SATA backplane
unit, which suffers from a flimsy SATA port connection to the PCB
(breaks loose when a tight SATA connector is pulled out). See
Customer Reviews of the latter here:

http://www.newegg.com/Product/Product.aspx?Item=N82E168...

This also happened to us:

"Cons: After one insertion and removal of a SATA cable into one of the SATA sockets a simple removal of the cable has ripped the socket right off the printed circuit board, leaving the socket with solder covered leads dangling at the end of the SATA cable."

So, we recommended the X14 to Newegg, and I'm happy
to say that they agreed with our recommendation,
even though the X14 is much more expensive.


MRFS
October 8, 2009 5:46:16 AM

kerdika said:
they are too expensive to consider
If you could truly duplicate the performance of an SSD with a cheaper alternative, then I would agree. But you can't, as has been explained in several posts across two threads. You can improve SOME aspects of performance, but not ALL of them.

If your workload needs the kind of performance that SSDs excel at, then only you can determine whether the extra cost is worthwhile. They may still be too expensive for YOU, but that certainly doesn't mean they're too expensive for everyone.
October 8, 2009 3:03:38 PM

I understand about the access times now...

Back to my original question: how does my RAID 0 of Savvio 15K.2s compare to the $100 SSDs?

Best solution

October 8, 2009 5:08:53 PM

kerdika said:
back to my original question: how does my RAID 0 of Savvio 15K.2s compare to the $100 SSDs?
...well, I think that's what we've been trying to tell you, but to summarize:

Your original question was: random reads/writes, sequential reads/writes, COST, scalability, longevity.


Random read/write - an SSD is still way faster than a 2-disk RAID 0 set of 15K rpm drives. For many people this is the most important performance metric and the reason why they're willing to pay extra for an SSD.

Sequential read/writes - the 2-disk RAID 0 should be faster, especially for writes. This can be very important in certain applications such as video editing or copying large files.

Cost - the hard drives will be cheaper on a $/GB basis, perhaps not so on a $/performance basis, depending on which performance metric you use.

Scalability - if by this you mean the ability to expand the size or improve the performance, I think it's a draw since you can put SSDs into RAID just as easily as hard drives. The difference is cost, not scalability. Hard drives have the edge for capacity.

Longevity - I don't think the stats are in yet for SSD reliability over, say, a 5-year span. It may be kind of a wash - SSDs have a known write cycle limitation, but mechanical drives have known mechanical issues. The relative reliability between the two probably depends a lot on the environment and workload they're used with.
October 20, 2009 5:09:27 PM

I was looking into the SAS/SSD offerings, and actually tried both.

SAS using 15K drives ($150 for 73GB) and a quality SAS RAID controller with 512MB of cache or more ($1000) is a better solution all around than current SSD offerings. With a good SAS RAID controller you will not notice any difference in access times over an SSD.

This may change over the next couple of years in favor of SSDs, but as of right now, SAS plus a good SAS RAID controller is the better solution and doesn't compromise on any application (games to video).

One BIG disadvantage of SSDs is CPU utilization. Now, a good RAID controller with full SSD optimization might come out ahead (I haven't verified), but that pushes the cost higher than SAS with far less capacity potential.

But an SSD is quiet compared to 15K SAS drives.

My hunch is SSDs will evolve, but for now SAS plus a quality SAS RAID controller is what's in one of my systems.
October 20, 2009 6:03:51 PM

You paid way too much for your 15k drives... and your controller...
October 21, 2009 12:57:02 AM

V8VENOM said:
With a good SAS RAID controller you will not notice any difference in access times over an SSD.
I'm finding that pretty hard to believe - are you really talking about access time (i.e., latency) and not transfer rate or concurrent I/Os?

Even a 15K drive (SAS or otherwise) is going to have a hard time getting access times less than 2ms, and a RAID array won't change that much, if at all. Intel's X25M drive, on the other hand, has a worst-case access time of 0.085ms, at least 25 times faster.

Which SSDs did you test with? I'd be more inclined to believe you if it was one of those older models with the JMicron controller...
October 21, 2009 3:45:48 PM

OK, here is how a 15k drive will read 3 random pieces of data:

2ms, data1, 2ms, data2, 2ms, data3

for a total of 6ms plus the data read times.

Now here is an SSD doing the same thing:

0.08ms, data1, 0.08ms, data2, 0.08ms, data3

for a total of 0.24ms plus read times.

The SSD reads all three pieces of data before the 15k can even think about reading the first...

Although I love my 15k's, they just can't beat that kind of speed...
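That back-of-the-envelope sum, as a tiny check (using the post's own 2 ms and 0.08 ms figures):

```python
# Three dependent random reads: total latency is reads * access_time,
# ignoring the transfer time of the data itself.

def total_access_ms(n_reads, access_ms):
    return n_reads * access_ms

print(round(total_access_ms(3, 2.0), 2))   # 6.0 ms for the 15k drive
print(round(total_access_ms(3, 0.08), 2))  # 0.24 ms for the SSD
```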
April 2, 2012 6:22:20 AM

sub mesa said:
Firstly, Serial ATA communication is bi-directional and full-duplex, meaning the operating system can send requests while at the same time receiving information.




Actually, SATA only operates in half-duplex mode; SAS is full duplex.
April 2, 2012 6:27:43 AM

kerdika said:
http://www.tomshardware.co.uk/forum/page-244279_14_0.ht...

Disk for disk the SSD is faster, but dollar for dollar they are close to the same...



Not entirely true, but almost; it depends on what mode of operation you are looking at:

an SSD with 0.5 ms access versus a RAID of 15K SAS drives at 5 ms, or random r/w, or sequential r/w.

A multi-drive SAS array will exceed the I/O of a single SSD. I personally did benchmarks and found the SAS array better than or equal to a single SSD.

It would be interesting to do a new benchmark against 2 SSDs and 2 SAS drives in RAID 0 and RAID 1 configs.
April 2, 2012 6:32:37 AM

kerdika said:
http://www.tomshardware.co.uk/forum/page-244279_14_0.ht...

Disk for disk the SSD is faster, but dollar for dollar they are close to the same...

One X25-M 80GB costs $280 and has a random I/O rate of about 58MB/s*
One Savvio 15K.2 36GB costs $20 and has a random I/O rate of about 3MB/s*
14 Savvio 15K.2 36GB cost $280 and have a random I/O rate of about 42MB/s**




I wonder how you calculated your SAS numbers. My single 73GB Fujitsu MAY2073 SAS drive has a read rate of 75 MB/s, while my Patr. Torq II 35GB SSD has a read rate of around 260 MB/s, but my 2 SAS MBA 3076 drives in RAID 0 had a read rate of 255 MB/s.
August 2, 2012 12:15:59 AM

I'm jumping in here to correct a misconception about the differences between SATA and SAS.

SATA is NOT full duplex. It is half-duplex. A SATA H2D FIS is not on the wire at the same time as a D2H FIS. At any instant in time a FIS can be in transit in one direction or the other, but not both.

SAS is full duplex. The Tx and Rx sides of a SAS connection are almost fully independent. (RRDY flow-control primitives are the exception.) The SAS Initiator can be transmitting a Command, Write Data, or Task Management frame to the Target at the same time the Target is sending a Read Data, XFER_RDY, or Status frame from a completely different command to the Initiator. This is one of the things that gives SAS a theoretical advantage over SATA. Dual-port active-active operation is another.

Having said that, everything is implementation dependent. A poorly implemented SAS Target may not perform as well as a well implemented SATA Device.


March 18, 2013 7:24:30 PM

kerdika said:
OK, so I'm probably beating a dead horse here, but...

SSD or SAS?

I have 2x Seagate Savvio 2.5" 15k 36GB SAS drives in RAID 0. I paid $20 apiece for the HDDs, and my board has onboard SAS RAID. I would like to know how this compares to an SSD setup in all the categories listed above.


I'm using 4 Seagate 74GB drives in RAID 5 connected to an HP/Compaq SA400-series controller. So far (2 yrs, 24x7) it's been rock solid and reliable. I briefly tried an SSD but ran into early drive failures.

Performance-wise, the 4-drive array matches an SSD in every way except for the 2 ms access time.
