goldy2

Distinguished
Aug 6, 2006
8
0
18,510
I'm looking at buying a server for my database (FileMaker Pro Server). It requires backing up a large amount of data (4 GB or so) every hour or two to another drive. They have a best practices document (somewhat old, however) that states it is advantageous to have two RAID controller cards, one with the live database and one with the backup (so they are on two different physical drives/drive sets), for speed. They also recommend keeping the system paging file on a different physical drive than the database (while the database server is running it is constantly reading/writing both the paging file and the database).

These days controller cards can support multiple RAID sets, correct (different channels?)? Is there any advantage to having 2 controllers vs. one? Maybe if the first one fills up its cache? Having talked to HP, they seem to think there is no need for another controller.

Thanks..
 

Gatorbait

Distinguished
Jan 29, 2009
63
0
18,640


Hi Goldy2,

You can have 2 separate RAID configurations on one controller. The only advantage of having 2 controllers is host bandwidth (PCIe from the card to the host). However, 4GB of data is not that much at today's bus speeds (PCIe is 2.5Gb/s per lane and SATA II is 3.0Gb/s). If you get a RAID controller, make sure it is a hardware RAID controller (has onboard DDR memory). That way your CPU is not taxed with XOR calculations and doesn't bog down shared memory bandwidth on multi-core processors. A UPS is a must unless you want to pay for the BBU (battery backup unit) for the memory on the RAID controller.
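If you're wondering what those XOR calculations actually are: RAID 5's parity block is just the XOR of the data blocks in a stripe, which is what lets the array rebuild a lost drive. Here's a rough Python sketch of the idea, simplified to whole blocks; a real controller does this in firmware per stripe, with read-modify-write on partial writes:

# Simplified illustration of RAID 5 parity: parity block = XOR of the data blocks.
# A hardware controller does this work so the host CPU doesn't have to.

def xor_parity(blocks):
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def rebuild_missing(surviving_blocks, parity):
    # XOR of the surviving data blocks and the parity recovers the lost block.
    return xor_parity(surviving_blocks + [parity])

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"      # three data blocks in a 4-drive stripe
p = xor_parity([d0, d1, d2])                # parity written to the fourth drive
assert rebuild_missing([d0, d2], p) == d1   # lose one drive, rebuild its block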

Recommendation: one RAID Card.

Good Luck,
Gatorbait
 

goldy2

Distinguished
Aug 6, 2006
8
0
18,510
Thanks Gatorbait! Sounds like one card is the way to go. I'm curious, though, what you meant by "The only advantage of having 2 controllers is host bandwidth"? In what case (example?) would it make sense to have 2 controller cards?

While the server is backing up, the database is paused and users can't do anything, so even saving a few seconds would help..

Thanks again
Goldy2
 

Gatorbait

Distinguished
Jan 29, 2009
63
0
18,640
Since your application is running on your host CPU, I believe your transfers would require the data to go to host memory before going back to the card to be written to the other drives. Thus, if you have two cards, you'd have greater bandwidth for other applications needing to use the main drive. Also, RAID 5 writes are slower than reads, and your other applications could be hitting the host drive without much effect on the second card's throughput.
 

specialk90

Distinguished
Apr 8, 2009
303
0
18,790
2 vs. 1: It greatly depends on the RAID arrays you will be using - 1/10/5/6. If the backup is on either a 5 or a 6, then you certainly want 2 controllers. If speed and less downtime are important, then 2 cards are a must. I can tell you from experience, using a 3ware 8-port SATA PCI-Express x4 card with a 4-drive RAID 5 and a 4-drive RAID 10 on the same card: it slowed down when writing to the RAID 5 coming from the RAID 10, but not when writing from my other RAID 10. It also slowed when writing to the 3ware RAID 10 from the RAID 5, just not as much.
RAID 5 and 6 especially require a good hardware controller with a dedicated CPU, such as Intel's IOP348, which is dual core and can give great RAID 5/6 write speeds. My 3ware card provides about 65% write throughput for RAID 5 (4 drives - 1 = 3; 3 drives @ 80MB/s average write each = 240MB/s aggregate throughput; 3ware write speed = 156MB/s, which is 65% of 240).

Also, since this is for business/work, a 2nd card is better to have for less downtime of the server, whether for the database or the backup. With 2 cards, if 1 fails, you can migrate its array to the other card (as long as there are enough ports) and wait for a new one to arrive. With 1 card, your entire server is down. FYI, the 'bus' speeds for PCI-Express are listed as full duplex (incoming plus outgoing data throughput combined), so they must be divided by 2 for max read or write speed.

You are looking at over 1TB of backups per month, so what is your plan so far for the number of drives for the database and the number of drives for the backup?
 

goldy2

Distinguished
Aug 6, 2006
8
0
18,510
Thank you for your great answer. So the bottleneck is the PCI-Express speed? I just seem to be getting different answers here.. 3 Gbps vs. 240 MB/s. The thought was to do 2 RAID 5s, though I am considering RAID 10.

To answer your question, the backups will overwrite themselves each day and go to tape each night, so the size of the drives is not a big issue.

(Just funny that it stunned the HP sales guy, as he seemed to have never heard of a server with multiple RAID cards.)
 

specialk90

Distinguished
Apr 8, 2009
303
0
18,790
Hi again.

1) 3Gbps is 3 gigabits, not gigabytes. To see how many bytes, you take 3 gigabits, which is 3000 megabits, and divide by 8; there are 8 bits in 1 byte. My explanation of the speed of my 3ware RAID 5 was meant to show the performance impact that RAID 5 has. If I were to take those same 4 drives and run them in RAID 0, which stripes data across all 4 drives, then the read and write speed should average 320 MB/s. The average speed of each drive is 80MB/s, so the RAID 0 speed is 4 x 80 = 320. However, RAID 5 is different. First off, RAID 5 loses 1 drive's worth of capacity to parity storage. So, with 4 500GB drives, you lose 1 and have 3 x 500GB usable. Those 3 drives average 90MB/s, so 90 x 3 = 270 MB/s max read speed, but writes are different. With RAID 5 writes, due to the way it works, there is a write speed penalty that greatly depends on the RAID controller. With a good controller, that 270MB/s max speed only incurs a slight penalty, down to 200-230MB/s. I hope I didn't confuse you any more.
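If it helps, here's that arithmetic as a small Python snippet. The 80 and 90 MB/s per-drive figures are just the averages I quoted for my drives, and the 0.8 write-penalty factor is a rough assumption for a good controller, not a measured number:

# Bits vs bytes: SATA II's 3Gb/s link is 3000 megabits / 8 = 375 MB/s of raw link rate
# (usable payload is lower, roughly 300 MB/s, because of the link encoding).
sata_link_MBps = 3000 / 8

def raid0_throughput(drives, per_drive_MBps):
    # Striping: all drives contribute, reads and writes alike.
    return drives * per_drive_MBps

def raid5_read_throughput(drives, per_drive_MBps):
    # One drive's worth of capacity goes to parity; streaming reads scale with n-1 drives.
    return (drives - 1) * per_drive_MBps

def raid5_write_throughput(drives, per_drive_MBps, penalty=0.8):
    # Writes pay a parity penalty; 0.8 is only a ballpark for a good hardware controller.
    return (drives - 1) * per_drive_MBps * penalty

print(raid0_throughput(4, 80))          # 320 MB/s
print(raid5_read_throughput(4, 90))     # 270 MB/s
print(raid5_write_throughput(4, 90))    # ~216 MB/s, in the 200-230 range mentioned above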

The HighPoint 4320 8-port SAS/SATA RAID controller is on sale at Newegg for $390.
http://www.newegg.com/Product/Product.aspx?Item=N82E16816115056

This card is the best card at this price by far. I would trade my 3ware 8-port card for this even though I paid $600 for mine. This HighPoint has Intel's IOP348 processor, which is what is used to calculate parity in RAID 5, and the IOP348 is one of the best on the market. Also, the card is PCI-Express x8, so the PCI-E bus shouldn't be a limiting factor. Don't worry that it's SAS & SATA; that just means if you want to upgrade to SAS drives some day, you can.

I would highly suggest getting 2 of these cards because of the great sale price and keep one as a spare.

For your drive setup:
1) OS + page file on a 2-drive RAID 1. This is the standard in servers.
2) 4-6 1.5TB or 2TB drives in RAID 5 for the database.
3) 2 250GB drives in RAID 1 for the daily backup.

Both the RAID 5 and the RAID 1 can be on the same RAID controller. However, if the backup needs to be as quick as possible, then using another controller for the backup drives is needed. What about the onboard RAID that comes on the motherboard? If the motherboard is Intel-based, then it should already have a decent RAID controller built in. Using this controller, you can create your RAID 1 for the OS and also a RAID 1 or even RAID 10 for the backup drives. That right there will help with speeds quite a bit.
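To put the backup window in perspective, here's a quick back-of-the-envelope in Python. The ~4GB figure is from the first post; the write speeds are only illustrative guesses for a RAID 1 pair vs. a faster array, not benchmarks of any specific hardware:

# Rough estimate of how long copying the ~4GB database takes at a given sustained write speed.
backup_size_GB = 4

def backup_seconds(size_GB, write_MBps):
    # time = data volume / sustained sequential write speed of the destination
    return size_GB * 1024 / write_MBps

for label, speed_MBps in [("RAID 1 pair (~80 MB/s, assumed)", 80),
                          ("RAID 10 on the onboard controller (~150 MB/s, assumed)", 150)]:
    print(f"{label}: about {backup_seconds(backup_size_GB, speed_MBps):.0f} seconds")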

I figured your database would be very large due to the amount of daily backup so that is why I chose 1.5-2 TB drives.

Another extremely important reason to use separate controllers is that the database will grow and you will need to add more drives down the road. Also, you should get 1 extra drive for the RAID 5 to use as a hot spare that can be used to rebuild in case of a drive failure.

I would get a good UPS also as that will help keep your entire system up.

If you need an entire system built, many of us here would put together the parts list for you.
 
Solution

sub mesa

Distinguished
specialk90 has given you excellent advice, I suggest you follow it. :)

One addition might be the use of a battery backup unit, which hardware controllers typically have an option for. They cost about $75-$100, I guess. Although with a UPS you're already quite safe from power interruptions, there are still residual risks with using RAID 5 write-back controllers with large amounts of internal memory. If you require true server-class write-back protection, you need a BBU as well. Should your OS crash, hit a driver bug, or have a problem in your power supply, then the UPS cannot help you and you might lose data or even end up with a corrupt filesystem.

Hope I didn't scare you too much; it's just a question of how much safety is a priority for you.

Cheers!
 

specialk90

Distinguished
Apr 8, 2009
303
0
18,790
Sub mesa, thanks for the credit. Also, what is your thought on using "desktop" drives vs. "enterprise" drives? Having just read about the many problems people have been experiencing with desktop drives in RAID, it seems enterprise-class drives should be used for their TLER-like function in RAID, in addition to the fact that they are designed to run 24/7 under constant load. Plus, they have higher IOPS numbers. However, for the TLER function, good hardware controllers should be able to mitigate such problems. At least it seems that way from what my 3ware controller says.
 

sub mesa

Distinguished
The drives I recommend are the WD Green ones, which also come in 1.5TB and 2.0TB capacities. These actually run at 5400rpm but are still very modern disks with high data density (meaning generally high speed for sequential operations). So while there is a slight performance penalty, they run very cool, as they use only half the energy of other, 7200rpm drives. That could be a good thing for a server running multiple drives, so you don't have to use fans to cool your disks and you don't have excess heat production in your case.

About TLER.. it all depends on how the controller deals with a disk that is not responding. Some may simply wait for the drive to respond indefinitely, and in some cases it may take a minute or so for a drive to repair damage or abort its recovery attempts. All TLER does is reduce the time the disk spends attempting to recover a bad sector to 7 seconds.

So instead of allowing the drive more time to try to repair the bad sector, the thought behind TLER is that the RAID controller handles the problem rather than the disk itself. What happens is the RAID controller kicks the disk out of the array and the array goes into DEGRADED mode. Best practice would be to immediately replace the disk and let the array rebuild. TLER makes sure a server never has to wait longer than 7 seconds for I/O handling; but this feature could have just as easily come from the controller, or from the software if it's software RAID.

Without TLER, it's possible to have a hanging or non-responsive system for longer than 7 seconds, IF the controller allows that. The controller could just as easily have imposed a maximum 2-second timeout before it kicks the disk out.
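To make the timeout point concrete, a tiny Python illustration. The 7 and 2 second figures are just the examples from above, and 60 seconds is only a stand-in for a desktop drive that keeps retrying; none of this is an actual controller API:

def worst_case_stall(drive_recovery_limit_s, controller_timeout_s=None):
    # The array stalls until either the drive gives up on the sector (TLER) or the
    # controller gives up waiting and drops the disk, whichever comes first.
    if controller_timeout_s is None:
        return drive_recovery_limit_s
    return min(drive_recovery_limit_s, controller_timeout_s)

print(worst_case_stall(60))      # desktop drive, controller that just waits: up to ~60s hang
print(worst_case_stall(7))       # TLER drive, controller that just waits: capped at 7s
print(worst_case_stall(60, 2))   # desktop drive, controller with its own 2s limit: 2s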

So TLER isn't holy by any means; it's overrated for any home user. It's actually meant to stop important servers from having a hiccup as a disk tries to recover data. Those companies have 30 spares lying around that important server, so they are more than willing to replace any potentially faulty disk without further investigation. That is mostly not true for home users.

Also, RAID Edition drives aren't the only ones that offer TLER; they just have it enabled by default. Check out this wiki for some great info:
http://en.wikipedia.org/wiki/TLER
 

specialk90

Distinguished
Apr 8, 2009
303
0
18,790
Thanks Sub mesa.

For a database server, as well as other hard-drive-intensive servers, you would be better off using drives as fast as possible within your budget and storage requirements. For a database, the need for faster drives depends on the number of clients/users accessing it. For 1-10 clients, the Green drives would work. For more than 10 clients, now or later, regular 7200rpm drives would work better. For even better performance, enterprise drives perform 15-20% faster than their desktop equivalents. For example, the Seagate 7200.11 is the desktop version and the Seagate ES.2 is the enterprise version. The ES.2 also happens to be the fastest 7200rpm drive in terms of IOPS, almost as fast as the VelociRaptor.

I wouldn't worry about heat from several drives, because the current generation of drives doesn't use much power or produce much heat. I have 4 Raptors and 4 7200.11s in one case and the drives never exceed 38 C. At one point in the same case, I had 8 7200.11s & 4 Raptors, and I didn't have any sort of heat problem. However, my 4 7200.10s, now 2 generations old, ran far hotter than the 7200.11s.

The best thing to do with several drives is to get a 4-in-3 drive bay, where 4 drives fit into 3 5.25" slots. There are several available, but you should want one that offers hot-swapping, which means you can pull a drive out while the server is up & running.

A 3-in-2 IcyDock $75, very good cooling and easy hot-swap
http://www.newegg.com/Product/Product.aspx?Item=N82E16817994026

A SAS/SATA 3-in-2 IcyDock. $78, if you ever want to use SAS drives
http://www.newegg.com/Product/Product.aspx?Item=N82E16817994052

SATA 4-in-3 Backplane, Athena Power, $92, good cooling
http://www.newegg.com/Product/Product.aspx?Item=N82E16817995005

SATA, 3-in-2 IcyDock, supports Raid, $98
http://www.newegg.com/Product/Product.aspx?Item=N82E16817994069

Sub mesa, maybe you can shed some light on whether these backplanes need to "support" RAID in order to be used in a RAID array. The Athena says it supports RAID but some of the IcyDocks don't.
 

sub mesa

Distinguished
specialk90, I would say database scalability depends on RAM much more than on disk performance. In reality, the whole database, or the part of the database that can be considered in "active use", will be cached in RAM. If this happens, only mutations (writes) will reach the disk and the disks don't have to serve read requests. This is very advantageous for performance, as disks don't like switching between reads and writes often.

About the docks, I don't know if these implement any electronics or internal RAID like some more expensive ones do. Those often have only one SATA port, and this one has four, one for each disk. That leads me to believe it simply connects the hard drives, so you should be able to use it with both hardware RAID and software RAID. In that case it wouldn't be any different from not using the dock and connecting the disks directly. Check to be sure, though.

"sub mesa, are you saying you'd be ok with using desktop drives on a hardware RAID?"
Yes I would, and I have been running such a setup for years with my Areca ARC-1230 controller.

As I have discovered, cheap hardware often offers more value for your money, as you can buy more of it and use it in a redundant fashion. As I understand it, the RAID Edition disks are not composed of any different hardware than the regular desktop versions, making the "not suitable for RAID or 24/7 usage" argument pretty weak. The RAID Edition disks are a way to make you spend a little more on the same disk; the only real plus I can see is the added warranty. But that won't make your disk die any less fast, so I'm not convinced of a difference in reliability.

It's possible, though, that through testing the manufacturers pick the best samples to be the RAID Edition versions, like CPU makers do. But in this case, what is there to test? Bad production making the drive more vulnerable than usual to vibrations or metal expansion/contraction (caused by temperature variation) is not that easy to test, and with the low profit per disk they don't want to add a lot of additional cost to the product. So I have my doubts here, too.

I've owned maybe a hundred disks throughout my life, and many are still running or useful in some way. I don't think reliability is that important, since no HDD is reliable enough to run without backup or redundancy for data you don't want to lose. SSDs, on the other hand, can provide enough reliability to warrant using no redundancy (although a backup should still be used to cover other risks); there would be no point for home users to put redundancy on SSDs for large amounts of storage. It's not like an SSD will crash: it will either become damaged by misuse or very ugly power failures, or it will reach the end of its life cycle, meaning you can only read from but no longer write to it. For a home user, that means an SSD is trustworthy enough to be used without redundancy, so use RAID 0 on it by all means. As long as the RAID driver isn't bugged, there should be no apparent risk.

Oh, and check out these new 2TB WD RE4-GP disks:
http://www.pcper.com/comments.php?nid=7054

Note that the article does state the RE4 disk is "constructed to higher tolerance", but I doubt that means they are made on separate factory lines; that would be very costly, and generally the industry chooses to have one factory line and simply disable functions in firmware for the cheaper versions. That's also why you can sometimes 'unlock' features with unsupported hacks, because in the end the hardware is all the same. Not saying this is the case here, just my general experience with the industry. ;-)
 

goldy2

Distinguished
Aug 6, 2006
8
0
18,510
Thank you very much for all the information! (especially specialk90 / sub mesa). The database size is actually really small (sorry, I should have pointed this out earlier), about 3 GB, but it should grow to maybe 10-15 GB. I'm very much a software guy but have been able to follow along.. Unfortunately the system was already ordered (6 250GB 7500 drives, 1 onboard RAID, set up as 2 RAID 5s), but I can learn from this and maybe adjust the system down the road.. I guess my lesson is not to listen to HP salespeople :)
 

specialk90

Distinguished
Apr 8, 2009
303
0
18,790
Is "onboard" raid mean the motherboard raid or 1 PCI-Express raid controller?

I ask because a lot of people have had problems with Intel's onboard Raid, especially in Raid 5 and even more so if the drives used were not designed for Raid. If the drives aren't designed for Raid and used on the onboard raid, they can and will "fall out" of the array.
 

Ignatowski

Distinguished
Feb 23, 2009
133
0
18,690
The main thing to watch on HP DL systems using the embedded Px00 RAID controllers is how the drives are isolated on the backplane.

Example: the DL385 G3 has 8 SAS/SATA ports. Ports 1-4 are on one backplane and ports 5-8 are on another. To get the best performance from this, you should keep the RAID sets isolated,
i.e. your OS drives in ports 1 and 2 (RAID 1) and the data drives in ports 5-8 (RAID 5 or 10). 4 drives will max out one side of the backplane, so isolating the OS/swap file from the data leaves the data backplane only for requests to the database. Striping across the backplanes tends to result in poor performance when the system pages.

Example 2: the Dell 2950 using the PERC 5/6i controllers (in the 3.5" drive config) has slots 1-4 on one backplane and slots 5-6 on another. Dell's stock config is to RAID slots 1 and 2 and then RAID slots 3-6; however, testing indicates that using slots 1-4 for the data RAID and slots 5-6 for the OS gives much better performance.

Most newer HP DL systems support floating drive locations, so moving the drives while the server is powered off will still keep the RAID intact.

You can tell which RAID controller your system is using with Device Manager or HP's array configuration manager.

The ML series of servers uses different RAID controllers than the DL series. I don't recommend using the embedded RAID controllers on the ML series for anything other than the OS in RAID 1.


If you post the exact HP model of the server and the drive form factor, I can give you a better recommendation. (E.g., there are 5 series of HP DL585s and 3 different drive config possibilities.)