SAS or SATA for new file server - please advise

January 21, 2008 11:05:38 PM

I'm upgrading my file/print server at work, which serves about 50-100 concurrent users and hosts 30 network printers. Should I go SATA or SAS(SCSI)?

SATA drives are so much cheaper that you can easily do RAID-10 for less $$$ than SCSI RAID-5. From what I understand RAID-10 should give better performance for a file server; please correct me if I'm wrong.

I understand SCSI drives have lower seek times and tend to be more reliable, but does it matter for a file server?

Another factor is # of drives. For the new server I need 2TB total space - which is easy to do with SATA (4 x 1TB drives = 2TB RAID-10). But to get this capacity in SCSI I'd have to resort to RAID-5 (6 x 400GB drives = 2TB RAID-5). That's a lot of drives, which means more points of failure, which would seem to partly offset the reliability advantage of SCSI drives.
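
For reference, here's the back-of-the-envelope capacity math I'm using for those two layouts (just a quick sketch; the drive counts and sizes are the ones above):

```python
# Back-of-the-envelope usable capacity for the two layouts above (illustrative only).

def raid10_usable_gb(drive_count, drive_size_gb):
    # RAID-10 mirrors every drive, so usable capacity is half the raw capacity.
    return drive_count * drive_size_gb // 2

def raid5_usable_gb(drive_count, drive_size_gb):
    # RAID-5 gives up one drive's worth of capacity to parity.
    return (drive_count - 1) * drive_size_gb

print(raid10_usable_gb(4, 1000))  # 4 x 1TB SATA in RAID-10 -> 2000 GB usable
print(raid5_usable_gb(6, 400))    # 6 x 400GB SCSI in RAID-5 -> 2000 GB usable
```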

What do you guys think - please advise!
January 22, 2008 6:18:05 PM

50-100 concurrent users is SAS/SCSI territory. You're going to have a fair number of concurrent requests, and SAS/SCSI drives have much higher IOPS than SATA.

Sequential transfer rate doesn't matter much here, so RAID 10 doesn't buy you anything. An enterprise-level RAID card (LSI or 3Ware), RAID 5, with SAS drives will perform very well.
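
As a rough illustration of that IOPS gap, here's a quick sketch; the seek times and spindle speeds below are typical published figures, not measurements of any specific drive:

```python
# Rough per-drive random IOPS estimate: 1 / (average seek time + average rotational latency).
# The seek times and spindle speeds below are typical published figures, not measured values.

def random_iops(avg_seek_ms, rpm):
    rotational_latency_ms = (60_000 / rpm) / 2  # on average, half a revolution
    return 1000 / (avg_seek_ms + rotational_latency_ms)

print(round(random_iops(8.5, 7200)))    # 7200 RPM SATA drive  -> roughly 80 IOPS
print(round(random_iops(3.5, 15000)))   # 15K RPM SAS drive    -> roughly 180 IOPS
```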
January 25, 2008 12:59:43 PM

I use primarily 3ware and Adaptec controllers (because they work!); stay away from LSI, because for me they did NOT. As far as drives go, use enterprise-quality drives, since this machine will be on all the time; these drives have a higher MTBF. Go with SATA drives from a quality company. I use WD for the quality, though Seagate has a better return policy and their drives seem to last about as long.

Don't go with higher-capacity drives unless space is an issue; having more drives can work to your benefit in access times and overall performance. So if you're weighing total storage against drive count and you have the bays available, go with 500GB drives instead of 1TB drives for this reason. You might also want to go with RAID 6 instead of RAID 5 for the extra redundancy. I have built up to 32TB with expansion units and found that the more drives you use (up to about 30), the better the access times. For single units, I commonly build 14-drive RAID 6 arrays of 500GB WD drives, plus 2 x 36GB drives for the OS, all in a 16-bay enclosure on either Adaptec or 3ware controllers; this works great. The OS is either Windows Server 2003 SP2 or some other Windows product. Sadly, Linux distributions will not work well on these cards. Other than that, if I can help out, please respond.
January 29, 2008 9:05:18 PM

Thanks for your help. So, neither of you think RAID-10 is worth doing for this application?

Also, regarding RAID cards, do you guys have any opinion on the Dell PERC-series controllers? I understand these are rebadged LSI MegaRAID cards.
January 29, 2008 11:17:50 PM

No, RAID-10 is not necessary. With most modern controllers, sustained RAID-5 reads will exceed the bandwidth of the server's Gigabit Ethernet connection. Making the disk array faster than that won't serve data to clients any faster.

I have several Dell PowerEdge servers that have the PERC RAID cards, and they work extremely well for RAID-1 applications. I use them only in that role, where the RAID-1 is protecting the C: drive. You're correct, the PERC cards are all manufactured by LSI.

All of my data lives outboard of my servers on an iSCSI SAN, so I have no experience with the PERC cards as a RAID-5 host. However, I suspect they would work pretty well.
January 31, 2008 4:47:06 PM

My main concern is with write times. The way my users are set up, their Desktop and My Documents folders are stored on the local hard drive but synchronized to the server at regular intervals (randomized to occur about six times per day).

So the "working copy" of all their data is read off C: not the server. What this means in practice is the majority of I/O on the server is write operations. It also happens that the write operations need to be as fast as possible, while the read speed is far less important.

There are also group folders on the server for each department or section, and for those files the numbers of reads and writes would be about the same. These are typically very small files, though, so speed doesn't really matter there.

So basically I need a RAID setup that's as fast as possible for writes - which is why I'm leery of RAID-5. And the reason I'm considering SATA is because 2TB RAID-10 w/SCSI would be so crazy expensive.

Another thing is, I'm wondering if I go SAS RAID-5, would that actually end up being slower than SATA RAID-10, because of the write-performance penalty inherent to RAID-5?
January 31, 2008 5:09:34 PM

Well, the write performance penalty is very controller- and server-dependent.

The 3Ware 9650SE can sustain 600 MB/sec RAID-5 writes. That far exceeds the network transfer capability of Gigabit Ethernet, which is 125 MB/sec in theory and only about 40-50 MB/sec in practice when transferring files over SMB with Windows Server 2003.

Another thing that can really affect a file server is the amount of RAM. Windows Server 2003 is very smart about caching frequently used and open files, and about caching write requests. If you set up a file server with 4-8 GB of RAM, your array speed will hardly matter at all unless your users frequently need to write 500 MB or more of data to the server at once.
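
To put those numbers side by side, here's a rough sketch; the array figure is the vendor claim above, and the SMB figure is a real-world observation, not a benchmark:

```python
# The effective file-serving speed is capped by the slowest link in the chain.
# Figures are the ones discussed above: the array number is the vendor claim for the
# 3Ware 9650SE, and the SMB number is a rough real-world observation, not a benchmark.

link_speeds_mb_per_s = {
    "RAID-5 array writes (3Ware 9650SE)": 600,
    "Gigabit Ethernet line rate": 125,
    "SMB over GigE on Windows Server 2003 (observed)": 45,
}

bottleneck = min(link_speeds_mb_per_s, key=link_speeds_mb_per_s.get)
print(f"Bottleneck: {bottleneck} at about {link_speeds_mb_per_s[bottleneck]} MB/sec")
```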

Let's narrow down some specifics:

How much storage space do you need right now?
How much will you need in 2 years time?
Do you have Gigabit Ethernet to all the desktops, or just in the server room?
What server are you thinking of purchasing?
Do you have other servers that have storage needs? (E-Mail? Database? Virtual machines?)

Once these questions are answered I'll have a better understanding of what path you might want to investigate.
January 31, 2008 11:18:11 PM

Quote:
Let's narrow down some specifics:

How much storage space do you need right now?
How much will you need in 2 years time?


1TB would hold us for 1-2 yrs.
2TB should take care of our needs for 3-4 yrs.

Quote:
Do you have Gigabit Ethernet to all the desktops, or just in the server room?


Gigabit to all servers and about half of our PCs.

Quote:
What server are you thinking of purchasing?


If I end up having to go with SAS, I'd probably get one of the Dell PowerEdge 2900 series.

If it turns out that SATA is good enough, I'd look at the Dell PowerEdge 840.

Another option I've seriously considered is to just get an OptiPlex 755 configured with no hard drives, then buy a pair of Seagate 1TB hard drives and do RAID-1 (the 755's motherboard has basic RAID capability). Or get a cheap 3Ware SATA RAID card. Either way this option would be super-cheap, and thus easy to replace or upgrade in a couple of years.

Quote:
Do you have other servers that have storage needs? (E-Mail? Database? Virtual machines?)


No - our email and database apps have their own dedicated servers.

Quote:
Once these questions are answered I'll have a better understanding of what path you might want to investigate.


Hey thanks again for all your help!
February 1, 2008 1:14:31 PM

I configured a Dell PowerEdge 2950 III on Dell's web site, in the following config:

2x Dual Core Xeon 5110 1.6GHz
4GB (4x1GB) DDR2-667 RAM
No OS
1x6 Backplane for 3.5" drives
PERC 6i SAS Controller with 256MB Cache
2 bays in RAID 1 with 80GB SATA 7200RPM drives
4 bays in RAID 5 with 500GB SATA 7200RPM drives
Rapid Rails
Redundant Power Supply
Broadcom NetXtreme II Dual Gigabit Ethernet
8x DVD
Basic 5x10 NBD HW 3Yr Support

for $4499.

This would give you 1.5 TB of storage in RAID 5, with a RAID-1 protected C: drive.

If this somehow ended up not being fast enough (which I seriously doubt), you can reconfigure the 4 x 500GB drives for RAID 10 and have 1TB of storage. Later, to expand the storage, replace the drives with 750GB or 1TB drives.

Another cool thing is that the PERC 6i can use SAS or SATA drives, so if you need even more speed at a later time you can replace the drives with SAS drives.

Another option, since you have other servers, is to think about moving to centralized storage. Check out the Dell MD3000i. This is an iSCSI array that can be used to centralize storage for all the servers on your network.

The idea is that if a server (any server - file, e-mail, database) runs out of storage space, you can expand the logical drives on the SAN to give that server more storage from the available pool of drives that are installed. The unit also supports volume snapshots, centralized management and monitoring, and several other features.

I have a similar unit from Promise (the VTrak 500i), and it has worked out very well. The only thing I'm disappointed in with the Promise unit is that it's not very fast, but I think the Dell one would work better.

I also have some unique storage needs (we do a lot of video), so I needed 5TB+ of storage space.
August 19, 2009 6:50:01 PM

This is probably too late to be of any use to the OP, but perhaps it will be useful to other people looking to do the same sort of thing.

1) LSI Logic makes a lot of "fakeRAID" controllers that require the CPU to do much of the work; in fact, I have yet to see an LSI product that does true RAID, so I question the claim that Dell PERC controllers (which do true RAID and do it well) are actually made by LSI. I'm not saying it's not true, just that I question it. Every Dell PERC and Adaptec card I have used has been a true RAID controller and proved it in performance benchmarks.

2) Related to 1, and contrary to comments made earlier in this thread, Linux support is excellent on Dell PERC and Adaptec RAID controllers, as well as HP and most other true RAID cards, while most on-board controllers and every LSI controller I have ever seen have required the use of dmraid (a fakeRAID enabler) or software RAID to work properly. Even so, the fakeRAID/software implementations do not perform like a true, dedicated, fully functional RAID controller. In fact, software RAID is often faster than fakeRAID.

Edit: According to a number of sources, HP, Dell (PERC), Adaptec, and very few others make true, fully featured RAID controllers. HighPoint, LSI Logic, Nvidia, Promise, and VIA make fakeRAID controllers that drain CPU resources to allocate blocks to drives, handle mirroring, and do all the parity calculations for RAID types that use them.

3) A good, true RAID controller should have no difficulty supporting RAID 5/6 parity calculations faster than the bus can feed them data. The myth that RAID 5/6 is slow is caused by the aforementioned fakeRAID controllers.
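
To make the parity point concrete, here is a tiny sketch of the XOR parity a RAID 5 controller computes (illustrative only) - the math itself is trivial for dedicated hardware:

```python
# RAID-5 parity is just the XOR of the corresponding blocks on the data drives.
# Any single lost block can be rebuilt by XOR-ing the surviving blocks with the parity.

data_blocks = [0b10110010, 0b01101100, 0b11100001]  # same-sized blocks from three data drives

parity = 0
for block in data_blocks:
    parity ^= block

# Simulate losing the second drive and rebuilding its block from what's left.
rebuilt = parity ^ data_blocks[0] ^ data_blocks[2]
assert rebuilt == data_blocks[1]
print(f"parity = {parity:08b}, rebuilt block = {rebuilt:08b}")
```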

I hope this helps someone when they go to implement RAID on a server; fakeRAID has been a huge pain in the behind for me over the years, and I wish someone had informed me about the difference a long time ago.
August 24, 2009 9:53:10 PM

You might be interested in the Thecus N7700SAS. This NAS server includes seven SATA/SAS hard disk bays that can accommodate multiple terabytes of storage.
August 25, 2009 12:02:28 AM

Mistoffeles said:
A good, true RAID controller should have no difficulty supporting RAID 5/6 parity calculations faster than the bus can feed them data. The myth that RAID 5/6 is slow is caused by the aforementioned fakeRAID controllers.
That "myth" is also based on the fact that RAID 5 is MUCH slower at write operations than any other RAID organization. To write to a RAID 5 disk the controller (be it hardware or software) has to read the old parity information, then write the data and also write the updated parity back to disk. And the read/write of the parity can't be done in parallel, meaning the operation takes at least twice as long to perform.
August 25, 2009 4:49:37 PM

Mistoffeles said:
This is probably too late to be of any use to the OP, but perhaps it will be useful to other people looking to do the same sort of thing.

1) LSI Logic makes a lot of "fakeRAID" controllers that require the CPU to do much of the work; in fact, I have yet to see an LSI product that does true RAID, so I question the claim that Dell PERC controllers (which do true RAID and do it well) are actually made by LSI. I'm not saying it's not true, just that I question it. Every Dell PERC and Adaptec card I have used has been a true RAID controller and proved it in performance benchmarks.

2) Related to 1, and contrary to comments made earlier in this thread, Linux support is excellent on Dell PERC and Adaptec RAID controllers, as well as HP and most other true RAID cards, while most on-board controllers and every LSI controller I have ever seen have required the use of dmraid (a fakeRAID enabler) or software RAID to work properly. Even so, the fakeRAID/software implementations do not perform like a true, dedicated, fully functional RAID controller. In fact, software RAID is often faster than fakeRAID.

Edit: According to a number of sources, HP, Dell (PERC), Adaptec, and very few others make true, fully featured RAID controllers. HighPoint, LSI Logic, Nvidia, Promise, and VIA make fakeRAID controllers that drain CPU resources to allocate blocks to drives, handle mirroring, and do all the parity calculations for RAID types that use them.

3) A good, true RAID controller should have no difficulty supporting RAID 5/6 parity calculations faster than the bus can feed them data. The myth that RAID 5/6 is slow is caused by the aforementioned fakeRAID controllers.

I hope this helps someone when they go to implement RAID on a server; fakeRAID has been a huge pain in the behind for me over the years, and I wish someone had informed me about the difference a long time ago.


To my misinformed friend:
LSI makes a plethora of true RAID controllers (where the RAID calculations are done on the adapter). To determine this, simply look to see whether DDR memory is installed on the adapter itself. Go to http://www.lsi.com/storage_home/products_home/internal_... - everything but the Entry line is true RAID. LSI not only designs RAID HBAs but also designs its own ROCs (RAID-on-Chip), RAID firmware, and RAID software. Their focus has not traditionally been the channel or retail market (though they recently bought 3ware, so maybe this will change); instead they focus on the OEM market. They sell into most of the world's biggest OEMs, and yes, I believe that does include Dell.
September 9, 2009 8:13:51 PM

Well, they may make Dell's (non-embedded) RAID cards properly, but the embedded LSI Logic RAID in my Asus servers was definitely fakeRAID, definitely crap, and had to be replaced with the real thing from Adaptec.
October 22, 2009 8:31:07 AM

paulcooperorama said:
I use primarily 3ware and Adaptec controllers (because they work!); stay away from LSI, because for me they did NOT. As far as drives go, use enterprise-quality drives, since this machine will be on all the time; these drives have a higher MTBF. Go with SATA drives from a quality company. I use WD for the quality, though Seagate has a better return policy and their drives seem to last about as long.

Don't go with higher-capacity drives unless space is an issue; having more drives can work to your benefit in access times and overall performance. So if you're weighing total storage against drive count and you have the bays available, go with 500GB drives instead of 1TB drives for this reason. You might also want to go with RAID 6 instead of RAID 5 for the extra redundancy. I have built up to 32TB with expansion units and found that the more drives you use (up to about 30), the better the access times. For single units, I commonly build 14-drive RAID 6 arrays of 500GB WD drives, plus 2 x 36GB drives for the OS, all in a 16-bay enclosure on either Adaptec or 3ware controllers; this works great. The OS is either Windows Server 2003 SP2 or some other Windows product. Sadly, Linux distributions will not work well on these cards. Other than that, if I can help out, please respond.


Do you still feel the same way now, with all the new hardware out and available? What kind of enclosure do you use - just a server case converted to hold drives? And what kind of connections are best to use to the hard drives? There are lots of details needed to fill in the blanks: how important are the CPU and memory in this configuration? I would love to try building one; I just need a push in the right direction. 3ware or ATTO has been my experience in the past, but I wasn't sure what is relevant for the hard drives these days. I know Western Digital makes drives intended for RAID use, and I realize error-recovery timeouts were one of the major issues - do you see any of these problems in your arrays? And do these controller cards allow more than one channel?
March 24, 2010 4:24:39 PM

macnalty said:
Do you still feel the same way now, with all the new hardware out and available? What kind of enclosure do you use - just a server case converted to hold drives? And what kind of connections are best to use to the hard drives? There are lots of details needed to fill in the blanks: how important are the CPU and memory in this configuration? I would love to try building one; I just need a push in the right direction. 3ware or ATTO has been my experience in the past, but I wasn't sure what is relevant for the hard drives these days. I know Western Digital makes drives intended for RAID use, and I realize error-recovery timeouts were one of the major issues - do you see any of these problems in your arrays? And do these controller cards allow more than one channel?


...Some months later...

Are you still working on this, or did you finally build something?