SSD vs SAS: Beating a Dead Horse...

kerdika

Distinguished
Sep 22, 2009
OK, so I'm probably beating a dead horse here, but...

SSD or SAS?

Not just performance; I'm talking everything: random reads/writes, sequential reads/writes, COST, scalability, longevity.

Here is why I bring this up:

I have 2x Seagate Savvio 2.5" 15k 36GB SAS drives in RAID 0. I paid $20 apiece for the drives, and my board has onboard SAS RAID. I would like to know how this compares to an SSD setup in all the categories listed above.
 
Solution
...well, I think that's what we've been trying to tell you, but to summarize:

Your original question was: random reads/writes, sequential reads/writes, COST, scalability, longevity.


Random read/write - an SSD is still way faster than a 2-disk RAID 0 set of 15K rpm drives. For many people this is the most important performance metric and the reason why they're willing to pay extra for an SSD.

Sequential reads/writes - the 2-disk RAID 0 should be faster, especially for writes. This can be very important in certain applications such as video editing or copying large files.

Cost - the hard drives will be cheaper on a $/GB basis, perhaps...

wuzy

Distinguished
Jun 1, 2009
SAS vs. SATA is not a matter of speed or latency (or anything performance-related); they're just two different protocols serving different purposes.

I have 2x Seagate Savvio 2.5" 15k 36GB SAS drives in RAID 0. I paid $20 apiece for the drives, and my board has onboard SAS RAID. I would like to know how this compares to an SSD setup in all the categories listed above.
You probably won't notice a major speed difference unless you attempt to run two or more disk-heavy activities at the same time, at which point the performance of an SSD flies way past ANY HDD. That's the main advantage of SSDs.
If an SSD is treated with idealistic/traditional HDD usage patterns, then you won't see much benefit over 15k rpm drives.

If utilised in a server (mainly a database server), then it's a whole different story.

 

kerdika

Distinguished
Sep 22, 2009
SAS does make a difference over SATA, although that was not what I had asked.
SAS controllers are intelligent, producing more throughput and offloading work from the CPU/northbridge. And they do not serve different purposes; they are both technologies for connecting storage devices, just different ways of doing it.

Also remember that I asked about price. My two drives cost half the price of a single 36GB SSD, so I could effectively have 4x 15k SAS drives in RAID 0 vs. one 36GB SSD for the same price, allowing me to pull 4 pieces of data at once while you can only pull one. If I'm not right about this then correct me, but through SAS multipath I/O and tagged command queuing I'm able to "multitask" my disks.

Define idealistic/traditional... my system is used for heavy multitasking and gaming.
 

sub mesa

Distinguished
SAS can have higher IOPS performance because it supports a deeper queue; SATA does support NCQ (mandatory in the SATA 3Gbps spec), but not with as deep a queue as SAS allows.
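
To put rough numbers on that, here's a minimal back-of-the-envelope sketch (Python). All the latency figures are assumptions for illustration, not measurements of any particular drive; the point is only that a deeper queue helps an HDD by letting it reorder seeks, and helps an SSD by letting it overlap work across channels:

```python
# Back-of-the-envelope model of why queue depth matters (illustrative
# numbers, not measurements of any particular drive).
#
# HDD: a deeper queue lets the firmware reorder requests (elevator-style),
# shortening the average seek; the single actuator still serves requests
# one at a time.
# SSD: a deeper queue lets the controller overlap requests across
# independent flash channels.

def hdd_iops(avg_service_ms: float) -> float:
    # One request in service at a time: IOPS = 1000 / service time in ms
    return 1000.0 / avg_service_ms

def ssd_iops(channels: int, per_read_ms: float) -> float:
    # Up to `channels` reads genuinely in flight at once
    return channels * 1000.0 / per_read_ms

print(f"15k HDD, QD=1  (~5.5 ms/request):        {hdd_iops(5.5):7.0f} IOPS")
print(f"15k HDD, QD=32 (reordered, ~3.5 ms avg): {hdd_iops(3.5):7.0f} IOPS")
print(f"SSD, 10 channels @ 0.25 ms per read:     {ssd_iops(10, 0.25):7.0f} IOPS")
```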

Other than that, an interface is simply an interface: the cable through which the data travels. It doesn't tell you anything about the actual storage device. In theory you could have a wonderfully fast floppy drive attached to that SAS connector. :)

Sadly, 15,000 rpm SAS drives actually perform quite badly for desktop applications. Their firmware is totally focused on IOPS at high queue depths: when 10 or even 1,000 I/O requests are waiting in the queue, they can be sent in groups and processed by the SAS drive in the most efficient order.

Because of this, a 10,000 rpm VelociRaptor often performs better than a 15,000 rpm SAS drive in desktop systems. And both SAS and SATA HDDs are inferior in performance terms once you consider an SSD: comparing an SSD to an HDD on random I/O performance is like comparing a snail to an aeroplane. An aeroplane without wings, I might add. :)
 

kerdika

Distinguished
Sep 22, 2009


Right! That's what I was getting at with this:
"SAS does make a difference over SATA although that was not what i had asked."



Again, right on the head. With the addition of multiple drives the load is now divided, scaling throughput with every disk. Comparing disk costs, you can get 4 15k SAS drives for the price of one SSD. I want to know how performance compares between 4 15k SAS drives and one SSD, since the price is the same. And I'm going by the cheaper "mainstream" ($100) SSDs, not the high-performance disks ($500+), otherwise we would be talking 20 SAS vs. 1 SSD :p
 

wuzy

Distinguished
Jun 1, 2009


Letting one disk activity complete before starting another task is what HDDs have drilled into the minds of most PC users. So multitasking should in most cases benefit from an SSD if there are concurrent disk activities going on. Most PC users would not dare to load, say, Photoshop, a WinRAR compression job and something else all at the same time on an HDD (or a few 15k rpm drives in striped RAID, for that matter).
It's similar to servers requiring high disk IOPS, but scaled down for PC usage. (SSDs utterly dominate in database servers.)
 

kerdika

Distinguished
Sep 22, 2009


How can an SSD access multiple things at once if it can only send one thing down the serial cable? The same goes for traditional disks, but I can buy multiple 15k drives for the price of a single SSD. With multiple disks I have multiple serial cables to transfer data on and multiple disks to read data from...
 
What scales is transfer rate and concurrent I/Os. Having a multi-drive RAID set does not scale access time: it takes just as long for a 10-disk RAID set to respond to an I/O request as it does for a single drive. This is why SSDs are so very much better at the kind of small random I/O loads typically seen by most users on a desktop PC - they can complete a small, random I/O request in about 1/100th the time of even the fastest hard drive, whether it's an individual drive or a drive in a massive RAID array.
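
As a minimal sketch of that point (Python, with assumed round-number latencies rather than benchmark figures): striping multiplies the aggregate IOPS you can get from concurrent requests, but leaves the per-request wait untouched.

```python
# Toy model: striping multiplies aggregate random IOPS when there are enough
# concurrent requests to keep every spindle busy, but each individual request
# still waits one full drive access time. All latencies are assumptions.

HDD_ACCESS_MS = 5.5   # assumed avg seek + rotational latency for a 15k drive
SSD_ACCESS_MS = 0.1   # assumed avg random-read latency for an SSD

def raid0_random_iops(drives: int, access_ms: float) -> float:
    """Aggregate IOPS, assuming enough concurrent I/O to keep every drive busy."""
    return drives * 1000.0 / access_ms

for n in (1, 2, 4, 10):
    print(f"{n:2d}x 15k HDD RAID 0: ~{raid0_random_iops(n, HDD_ACCESS_MS):7.0f} IOPS; "
          f"each request still takes ~{HDD_ACCESS_MS} ms")
print(f" 1x SSD:            ~{raid0_random_iops(1, SSD_ACCESS_MS):7.0f} IOPS; "
      f"each request ~{SSD_ACCESS_MS} ms")
```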


Repeat after me - for individual drives:

SSDs are way, way better for random I/O.
SSDs and hard drives are not that far apart in terms of sequential performance.
Hard drives are way, way better for cost per capacity.


For RAID arrays (of HDDs or SSDs):

RAID can improve sequential transfer rates.
RAID can improve the overall throughput for concurrent I/Os.
RAID cannot improve access times.
 

kerdika

Distinguished
Sep 22, 2009
Access time is only 2ms on a 15k drive. I realize an SSD is faster, but with multiple drives each drive can pull one piece of the random data. Granted, it takes 10 drives 2ms to pull a piece of data, but if each drive is pulling the next piece in line, it ends up being 2ms for 10 requests, where an SSD is going to take 0.002ms or something like that per request. So the SSD has to read 10 times where 10 15k SAS disks each have to read once.
 
You're talking about concurrent I/O. If you're loading an application, and the application needs to load DLL 1, then DLL 2, then access INI file 1, then read some stuff from the registry, etc., that is not concurrent I/O. An SSD will be able to zip through those tasks about 100X faster than any HDD-based RAID array. That's why random I/O performance is so important.
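
A tiny sketch of the arithmetic (Python; the chain length and latencies are assumed round numbers, not measurements) shows why RAID width doesn't help a dependent chain of reads:

```python
# Each read depends on the result of the previous one (DLL 1, then DLL 2,
# then an INI file...), so no amount of RAID striping can overlap them.

DEPENDENT_READS = 40   # assumed chain of small serialized reads at app launch
HDD_ACCESS_MS = 5.5    # per-request latency, regardless of RAID width
SSD_ACCESS_MS = 0.1    # per-request latency of a single SSD

print(f"HDD RAID (any width): {DEPENDENT_READS * HDD_ACCESS_MS:.0f} ms")
print(f"Single SSD:           {DEPENDENT_READS * SSD_ACCESS_MS:.0f} ms")
```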
 

sub mesa

Distinguished

Firstly, Serial ATA communication is bi-directional and full-duplex, meaning the operating system can send requests while at the same time receiving information. Communicating with the drive's electronic components goes at near-wire speed (300MB/s), though it's best expressed as latency (ns).

Second, modern SSDs and all still-produced HDDs have DRAM memory chips onboard, which act as quick but volatile memory to store requests and serve as internal "RAM" for the drive's own purposes. This memory is very fast, like your own RAM. Without it, both SSDs and HDDs would be very slow: they would indeed have to process things in serial order, without the ability to continue working straight after completion of an I/O. So in order to keep that actuator busy as much as possible, you need some form of buffering and queueing. That's why operating systems can send multiple requests, and why hard drives lie about data being written when in fact it's still in their DRAM. They do this so they can already receive the next request while in the meantime writing that data to the actual mechanical storage. So an HDD really has an electronic side and a mechanical side.
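
Here's a minimal sketch of that buffering behaviour (Python; the timings are assumptions, and this is a toy model rather than how any real firmware works):

```python
# The drive acknowledges each write as soon as it lands in DRAM, then
# flushes to the platters in the background. Timings are illustrative.

from collections import deque

DRAM_ACK_MS = 0.05   # assumed time to accept a request into the cache
FLUSH_MS = 5.0       # assumed time to commit one write mechanically

cache = deque()
host_clock = 0.0
for req in range(8):           # the host issues 8 back-to-back writes
    host_clock += DRAM_ACK_MS  # the host only ever sees the DRAM latency
    cache.append(req)
print(f"host-visible time for 8 writes: {host_clock:.2f} ms")

drive_clock = host_clock
while cache:                   # the drive drains the cache in the background
    cache.popleft()
    drive_clock += FLUSH_MS
print(f"time until data is truly on the platters: {drive_clock:.2f} ms")
```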

Because of the nature of SSDs, a completely new or heavily revised protocol should come into play to allow operating systems to exploit the benefits of flash storage, and also to work around its weaknesses. Much of the optimization work done in the past aimed to make I/O more sequential and to reduce seek time; while that works great for HDDs, SSDs get no benefit from this strategy, and it may even cause lower performance than no optimization at all. With a new extension in play, SSDs could have a very deep queue of requests to handle, and process them in the most parallel way possible. Because flash cells work independently from each other, the potential for nearly endless parallelization is there.

Honestly, this is like the importance of fusion power to the energy crisis, but for the storage industry. Hard drives just aren't very good storage media, and flash-based storage, if affordable, has many benefits both theoretical and practical. The race now will be over who designs new flash controllers that exploit more of that parallel-processing potential, or perhaps even a new interface that queues I/Os more efficiently. Currently, SSDs like Intel's use NCQ to queue I/O requests, even though NCQ was originally meant to reduce seek times on mechanical storage. :)
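
As a toy illustration of that parallelism (Python, using sleeps as stand-ins for media latency; the channel count and per-read latency are assumptions), a deep queue drains far faster when requests can overlap:

```python
# Independent flash channels can drain a deep queue of reads concurrently,
# while a single actuator must serve the same queue one request at a time.

import time
from concurrent.futures import ThreadPoolExecutor

READS = 32
FLASH_READ_S = 0.01   # assumed per-read latency
CHANNELS = 8          # assumed number of independent flash channels

def media_read(_):
    time.sleep(FLASH_READ_S)  # stand-in for the actual media access

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CHANNELS) as pool:  # SSD: 8 in flight
    list(pool.map(media_read, range(READS)))
print(f"8-channel 'SSD': {time.perf_counter() - start:.2f} s")

start = time.perf_counter()
for i in range(READS):                                  # HDD: strictly serial
    media_read(i)
print(f"serial 'HDD':    {time.perf_counter() - start:.2f} s")
```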
 

sub mesa

Distinguished
sminlal: I agree with everything you said. But there is one minor exception: RAID 1 can be used to reduce seek times slightly.

The split-I/O strategy, found in BSD's geom_mirror software RAID 1 driver, sends the same read request to both disks; whichever responds quicker "wins". The benefit is a cut in the average rotational delay: at any point in time the two disks have different head positions and their rotations aren't synchronized, so one of them will happen to be closer to the requested sector. With two independently spinning disks, the expected wait drops from half a rotation to about a third of one. This shaves off some of the total service time of the request, but it's not that much. The new "load" algorithm works very well to increase the IOPS of RAID 1, which is quite uncommon, since most RAID 1 sets perform like a single disk.
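
A quick Monte Carlo sketch (Python; it models only the rotational delay, ignoring seeks, and illustrates the expectation rather than geom_mirror's actual algorithm) bears out the numbers:

```python
# Issue the same read to both mirror halves and take whichever finishes
# first. Only rotational delay is modelled; seek time is ignored.

import random

ROTATION_MS = 4.0   # one revolution at 15,000 rpm (60,000 ms / 15,000)
TRIALS = 100_000

single = sum(random.uniform(0, ROTATION_MS) for _ in range(TRIALS)) / TRIALS
best_of_two = sum(
    min(random.uniform(0, ROTATION_MS), random.uniform(0, ROTATION_MS))
    for _ in range(TRIALS)
) / TRIALS

print(f"single disk: {single:.2f} ms average rotational delay")       # ~2.00
print(f"best of two: {best_of_two:.2f} ms average rotational delay")  # ~1.33
```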
 

kerdika

Distinguished
Sep 22, 2009
http://www.tomshardware.co.uk/forum/page-244279_14_0.html#t1777969

Disk for disk the SSD is faster, but dollar for dollar they are close to the same...

One X25-M 80GB costs $280 and has a random-I/O rate of about 58 MB/s.*
One Savvio 15K.2 36GB costs $20 and has a random-I/O rate of about 3 MB/s.*
14 Savvio 15K.2 36GB drives cost $280 and have a random-I/O rate of about 42 MB/s.**

It's important to note that the 14 Savvios will have ~500 GB of storage vs. the SSD's 80 GB.

* 8KB random reads: http://images.anandtech.com/graphs/ssdfortheenterprise_011909095250/18583.png and http://images.anandtech.com/graphs/intelx25mg2perfpreview_072209165207/19506.png
** calculated
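
For what it's worth, here's that arithmetic in a few lines of Python, under the optimistic assumptions that random throughput scales linearly with drive count and that the workload always supplies enough concurrent I/O (a caveat the next reply picks up):

```python
# Reproducing the arithmetic above with the figures quoted in this post.

SSD_PRICE, SSD_MBPS, SSD_GB = 280, 58, 80   # X25-M figures quoted above
HDD_PRICE, HDD_MBPS, HDD_GB = 20, 3, 36     # Savvio 15K.2 figures quoted above

n = SSD_PRICE // HDD_PRICE                  # 14 drives for the same money
print(f"{n} Savvios: ~{n * HDD_MBPS} MB/s random, {n * HDD_GB} GB, ${n * HDD_PRICE}")
print(f"1 X25-M:    ~{SSD_MBPS} MB/s random, {SSD_GB} GB, ${SSD_PRICE}")
```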

 
Yes, of course, you are absolutely correct. But as you said, the gains are essentially negligible compared with the access time of an SSD.

Interestingly, that strategy is probably useless with SSDs, since both should respond in the same amount of time (if they're both mirrored drives in a RAID 1 set, then the internal allocation of blocks ought to be, in theory at least, identical).
 
And again, that I/O rate is only obtainable when you have enough concurrent I/Os to keep all the drives busy.

Arguing about disk performance is like arguing about the economy - it totally depends on which factors YOU consider important vs. those which your opponent considers important... :??:
 

sub mesa

Distinguished
Also note that the new X25-M G2 is better at random reads than the SLC-based X25-E. Any benchmarks performed at the X25-M's release are outdated, because those drives still had firmware issues that failed to prevent performance degradation from oversized mapping clusters. This got fixed in the new firmware, and the G2 adds even higher random read performance and even lower read latency than the SLC product. The SLC product has life-expectancy benefits the MLC flash products can't match, though with Intel's advanced wear leveling this shouldn't be a problem for most if not all practical uses.
 

cirdecus

Distinguished



As far as cost is concerned, go with the SSDs. SAS still requires the controller, which I know you have, but if something changes, the SSDs will work with any SATA controller out there. The technology is better, and I understand that you can take a gazillion SAS drives and balance out the performance of one SSD, but don't do that. The cost is there, but you're better off stepping up to SSDs.
 

kerdika

Distinguished
Sep 22, 2009


Here are my two big problems with the current SSDs:

They are not SAS, which is a much more efficient way of doing things.

And they are too expensive to consider. Sure, there are cheap SSDs out there, but my RAID 0 15k's walk all over those, and the good ones (Intel and such) are just too expensive for me to justify.

I originally posted this to compare my 2 15k Savvios to a "cheap" ($100) SSD.
 

wuzy

Distinguished
Jun 1, 2009
Good lawd, your posts give me a headache...

They are not SAS, which is a much more efficient way of doing things.
The advantages SAS has over SATA mainly relate to SAN environments, and I'm not going to list each and every one of them. As I've said before, SAS vs. SATA (just the interface, not the drives) has nothing to do with performance.

And they are too expensive to consider. Sure, there are cheap SSDs out there, but my RAID 0 15k's walk all over those, and the good ones (Intel and such) are just too expensive for me to justify.
MLC Indilinx-based SSDs are good alternatives to the X25-M and frequently approach $2.6/GB or less if you spot a good deal on a site like Slickdeals. You obviously have not seen the data on SSD performance (in either server or desktop work patterns) and blatantly spew useless BS. See the long comment I made below, which compiles data for database, webserver and single-user usage comparing 15k HDDs to SSDs.

As I said before, if you tried something like virus scanning, WinRAR compression and loading up a large app all at the same time, even your RAID 0 15k array would crumple in performance; a single SSD would not.

FYI, as a sysadmin, part of my job is to build different storage arrays for different purposes, whether for our workstations or servers. Analyzing data cognitively is what I thrive at.

Lastly, learn how to spell and punctuate properly.

I'm not going to waste any more time in this thread and lose more brain cells to your lunatic arguments.
 

kerdika

Distinguished
Sep 22, 2009


Wow, you are a dick.

Is that good spelling and punctuation? Did it ever cross your mind that I'm here to learn, as well as to explore avenues no one has before?

$280, or anything over $100, is ridiculous for a disk. As for multitasking, I can run WinRAR and play games/open PDFs all at once and I don't have to wait; for that matter, I peak my CPU before I can load up the disks.

MY DISK ARRAY COST ME $40!!! Go buy an SSD with the same performance for that...
 

MRFS

Distinguished
Dec 13, 2008
Lots of good thoughts above: here are my 2 cents:

A LOT depends on the nature of the workload you intend
to give your storage subsystem.

Because the technology is rather mature, particularly
with the widespread availability of perpendicular magnetic
recording and the largest caches e.g. 64 MB per HDD,
a multi-drive RAID effectively multiplies that cache
for RAID-0 arrays: e.g. 4 x HDDs @ 64MB cache = 256 MB cache.

Given current SSD prices, it's hard to beat rotating platters
in terms of cost per gigabyte -AND- in terms of performance
that is acceptable for SOME workloads -- but NOT ALL workloads.

If fail-safe redundancy is ABSOLUTELY NECESSARY, then of course
a RAID-0 is the WRONG WAY.

A LOT also depends on the power of your RAID controller(s):
a hardware RAID controller with a very large on-board cache,
particularly one that uses an x8 lane PCI-E 2.0 interface,
is a must for any serious storage subsystem.

We've been looking at storage technology for a long time,
e.g. ever since super minicomputer days, and it is clear
there is an ever widening performance gap between CPUs and RAM,
on the one hand, and rotating platters, on the other hand.

Rotating disk drives are getting much larger
without getting much faster, in general.

Even THE fastest SAS/6G HDDs at 15,000 rpm cannot approach
the extremely low access times of modern SSDs.


We have decided to wait, and our next purchase will
probably be a SAS/6G controller, like Intel's RS2BL080
or RS2BL040:

http://www.intel.com/Products/Server/RAID-controllers/RS2BL080/RS2BL080-overview.htm

Then, as soon as SATA/6G SSDs become more widely available,
we expect that the prices of SATA/3G SSDs will fall,
giving us more options to choose from, and also
it's more likely that SSD technology will have matured
even more by then.

For example, Intel's Matrix Storage Technology
for the ICH10R does not (yet) support the TRIM function,
to my knowledge. (Please, someone correct me on
this point if I am in error.)


So, a safe approach, for now, is to invest in a powerful PCI-E
SAS/6G RAID controller and wire it to fast 15,000 rpm
SAS/6G HDDs like Seagate's Savvio 15K.2:

http://www.seagate.com/www/en-us/products/servers/savvio/savvio_15k.2/

SATA/6G SSDs should be plug-compatible, by the
time they do become more widely available.
SATA/3G SSDs are already pushing the 300MB/sec. limit
of the SATA-II interface, so it's reasonable to expect
SATA/6G SSDs to exceed 300MB/second.


We also just installed 2 of these Enhance Tech X14's,
in anticipation of migrating to the 2.5" form factor:

http://www.newegg.com/Product/Product.aspx?Item=N82E16816201054


The X14 is much better built than Athena's comparable SATA backplane
unit, which suffers from a flimsy SATA port connection to the PCB
(it breaks loose when a tight SATA connector is pulled out). See
the Customer Reviews of the latter here:

http://www.newegg.com/Product/Product.aspx?Item=N82E16816119006

This also happened to us:

"Cons: After one insertion and removal of a SATA cable into one of the SATA sockets a simple removal of the cable has ripped the socket right off the printed circuit board, leaving the socket with solder covered leads dangling at the end of the SATA cable."

So, we recommended the X14 to Newegg, and I'm happy
to say that they agreed with our recommendation,
even though the X14 is much more expensive.


MRFS
 
If you could truly duplicate the performance of an SSD with a cheaper alternative, then I would agree. But you can't, as has been explained in several posts in two threads. You can improve SOME aspects of performance, but not ALL of them.

If your workload needs the kind of performance that SSDs excel at, then only you can determine whether the extra cost is worthwhile. They may still be too expensive for YOU, but that certainly doesn't mean they're too expensive for everyone.