6 Drive RAID 0 SATA I vs SATA II speed question ICH9R

HamSteak

Distinguished
Apr 28, 2008
I am building some new home systems. The motherboard I have chosen for these systems is the Gigabyte GA-X38T-DQ6. This comes with the Intel ICH9R southbridge, which allows 6 drives in a RAID 0. I am wondering which would provide the best performance: 7,200 RPM SATA II drives, or 10,000 RPM SATA I drives. Cost is not really an issue, because if I go SATA II I will use the 1 TB variation. These will be server systems, not desktops. I am just looking for whichever one will provide the best performance.

Also, what are your thoughts on going RAID 10 vs. RAID 0? I know this would provide fault tolerance, but this is a development environment, so all critical data is already stored on a separate backup system set up as RAID 10. How much of a performance difference would this make?

Thanks,

Ham
 

boonality

Distinguished
Mar 8, 2008
I always find this interesting. Unless you are testing the RAID controller itself, there is no reason to spend the extra money for the performance in a test bed... Why waste the money? It is better spent on your production servers.
 

SomeJoe7777

Distinguished
Apr 14, 2006


That didn't really answer the question. I/O to/from/for what? What specifically is this box intended to do? Act as a file server? Web server? Database server? How many concurrent users? How many records in the database? What server-side environment (ASP.NET, ASP, CF, JSP, PHP)? What applications will be running? Do the applications access a lot of small files, or do they process big files? Will the server be accessed locally or entirely over the network? Over the internet?

We can't make any specific recommendations until we know more. "It's a server doing I/O"? Well, all servers do I/O. That's sort of a given.
 

MadHacker

Distinguished
May 20, 2006
I would recommend RAID 10.
Granted, all the information is backed up... but downtime from a single drive failing would still cost you time to set everything back up, considering the whole array would be lost.
 

lashrimp

Distinguished
Nov 12, 2006
RAID 5 is supported by the ICH9R. Enabling volume write-back cache in Intel Matrix Storage Manager will give you performance close to RAID 0.
 

HamSteak

Distinguished
Apr 28, 2008
Apparently I have not fleshed out the question. I am asking which is going to give the better throughput: the SATA I RAID or the SATA II RAID. Will the six 10,000 RPM drives max out the bus? That is the real question. And boon, yes, it is important to have the capability for extreme performance in development environments. That way, when you are hammering the hell out of the system, you can find out exactly how it will perform under stress. By your logic, we should all be developing against P2s because it's all the same thing, right?

lashrimp, what kind of storage capacity drop would I see in a RAID 5 configuration? Because of parity I would really only be able to have the storage capacity of 2 drives, so how are those 2 drives going to compete against 6 in RAID 0?

And Joe, since you must know, one of these servers will be running a DB, and the others will be running a custom-built server application that will be accessing a multitude of file sizes. I do understand the issue of seek time, but since I am more interested in actual throughput, and in the ability of the bus to handle this many drives being accessed simultaneously, file size and what the server is doing are pretty irrelevant to my question.
 

SomeJoe7777

Distinguished
Apr 14, 2006


If you truly believe that, then you have a gross misunderstanding of how servers work. What the server is doing is at the very core of designing the system for performance.

In summary:

1. A development system has very little need for the extreme edge of performance. That should be done with the production system.

2. RAID 0 on a server is useless and detrimental. The purpose of RAID is availability, and maximizing IOPS for certain applications. RAID 0 is not redundant, decreases availability, and doesn't gain you any performance in the server environment over RAID 5.

3. Disk subsystem throughput is pretty irrelevant for most servers because the network connections are the bottleneck. Windows XP/Vista/2003 can't transfer data at more than about 35 MB/sec over Gigabit Ethernet; Linux and Windows Server 2008 can manage only about 65 MB/sec. If your disk subsystem is faster, that's good for concurrent users, but much beyond 100-120 MB/sec doesn't do anyone any good because you've maxed out your network connections (assuming 2x-4x link aggregation). There are exceptions where a server is doing a high-throughput task locally, such as DVD multiplexing or perhaps other video-stream-related tasks, but those situations are rare. If you run into a situation where a service needs to serve more than 100-120 MB/sec to concurrent users, it's time to build a server farm with a load balancer. (Rough arithmetic on this is sketched after this list.)

4. If you're interested in extreme performance, the ICH9R is not a realistic solution. That's consumer-level equipment, and it's inadequate for server-related high-performance uses.

5. 7200 RPM SATA drives are generally regarded as consumer equipment. 10,000 RPM SATA drives are somewhat better, but all SATA drives are laid to waste by SAS drives in server performance.

6. A single RAID array is not what will get extreme database performance. High transactions-per-second databases need separate arrays for the data and transaction log files, each configured with a different RAID configuration.

7. High end RAID controllers with high amounts of onboard cache are required for high database performance.
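
To put some rough numbers on point 3: the little Python sketch below is just back-of-the-envelope arithmetic. The network ceilings are the ballpark figures quoted above, and the 80 MB/sec per-drive rate is an assumption, not a measurement.

# Back-of-the-envelope bottleneck check for point 3. All figures are rough
# assumptions (network ceilings from the ballpark values above, per-drive
# rate is a guess), not measurements.

disk_count = 6
per_disk_mb_s = 80                       # assumed sustained rate of one 7,200 RPM SATA drive
array_mb_s = disk_count * per_disk_mb_s  # best case: RAID 0 scales linearly

net_ceiling_mb_s = {
    "Windows XP/Vista/2003 over GigE": 35,
    "Linux / Windows Server 2008 over GigE": 65,
    "2x-4x link aggregation": 120,
}

print(f"6-drive RAID 0, best case: ~{array_mb_s} MB/s from the disks")
for path, ceiling in net_ceiling_mb_s.items():
    bottleneck = "the network" if ceiling < array_mb_s else "the disks"
    print(f"{path}: ~{ceiling} MB/s on the wire -> bottleneck is {bottleneck}")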

If you are unaware of these things and believe that simplifying your server design down to RAID 0 on SATA drives is going to net you high performance, think again. This is a server you're building, not a Crysis/BF2 machine.

But anyway, you seem to know exactly what you want and how to build it, so I'm not sure why you even started the thread in the first place. My advice would be to go build your server the way you want to do it and be happy, and stop asking questions since you don't want to listen to the answers.
 

MadHacker

Distinguished
May 20, 2006



I have seen file transfers from Windows XP to Windows Server 2003 at 80-85 MB/sec using a cheap D-Link switch.
Also, it is conceivable that the testing he is doing may use virtual clients to test server software, where file access throughput must be high.
In the past our company would run simulated client software on the server just to test scalability from 10 users to 100,000 users.
For this, a simple in-house test machine would need high-speed file access for testing and debugging.
Then, once it is working well, approval for a production system can be sought to finalise the testing.
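
Something like this toy Python sketch is what I mean by simulated clients. It is purely illustrative: the host, port and request are made up, and a real rig would drive the actual protocol and record latencies.

# Toy "simulated clients" load test. HOST/PORT and the payload are placeholders.
import socket, time
from concurrent.futures import ThreadPoolExecutor

HOST, PORT = "dev-server.local", 8080      # hypothetical test box
CLIENTS, REQUESTS_PER_CLIENT = 100, 50

def simulated_client(client_id):
    ok = 0
    for _ in range(REQUESTS_PER_CLIENT):
        try:
            with socket.create_connection((HOST, PORT), timeout=5) as s:
                s.sendall(b"PING\n")       # stand-in for a real request
                if s.recv(1024):
                    ok += 1
        except OSError:
            pass
    return ok

start = time.time()
with ThreadPoolExecutor(max_workers=CLIENTS) as pool:
    results = list(pool.map(simulated_client, range(CLIENTS)))
print(f"{sum(results)} successful requests from {CLIENTS} clients "
      f"in {time.time() - start:.1f} s")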
 

shawnf

Distinguished
Jul 29, 2005


I would think that 6 modern 7.2K drives probably would max out the throughput. I have a 680i, though, and I have 3 Raptors. One runs at about 100 MB/s; 3 on the controller in striped mode run at about 120 MB/s. When I put 2 WD 7.2K drives on the same controller I also get 120 MB/s, but a single drive is about 70 MB/s or thereabouts. Since the ICH9R isn't a dedicated RAID controller, the results would be similar, although the throughput on that chipset may be higher than on the 680i.

The difference is that the Raptors will find the files to transfer faster, since their seek time is quicker than a 7.2K drive's.

Those are just my observations on the 680i, which is why I am thinking of a dedicated RAID controller card to get more throughput.
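
For what it's worth, you can get a rough sequential read number without special tools. A quick Python sketch, with the path as a placeholder; these are plain buffered reads, so the OS file cache will inflate the result unless the file is much bigger than RAM:

# Crude sequential-read benchmark: read a large existing file in 1 MB chunks
# and report MB/s. The path is a placeholder; use a file several times larger
# than RAM (or drop caches first) or the OS cache will skew the number.
import time

PATH = "/path/to/big_test_file.bin"    # placeholder, point it at a real large file
CHUNK = 1024 * 1024                    # 1 MB reads

total = 0
start = time.time()
with open(PATH, "rb") as f:
    while True:
        buf = f.read(CHUNK)
        if not buf:
            break
        total += len(buf)
elapsed = time.time() - start
print(f"Read {total / 2**20:.0f} MB in {elapsed:.1f} s -> {total / 2**20 / elapsed:.1f} MB/s")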
 

rozar

Distinguished
Jun 7, 2007
I don't think the real question here is which drives will provide the most throughput. Since SATA is serial, you have a 150 or 300 MB/s pipe PER PORT depending on the controller. Neither a Raptor nor any 7,200 RPM drive will fill that pipe. By the way, calling a 150 MB/s drive SATA I and a 300 MB/s drive SATA II is really wrong, since SATA II is just a set of extensions, and a Raptor having just NCQ could make it technically SATA II (by your definition). See more here - http://www.sata-io.org/namingguidelines.asp

But back to the question. The 10,000 RPM drives will have more throughput in general than a 7,200 RPM drive, except that recently some 7,200 RPM drives have become even faster than the Raptors. So it depends on which 7,200 RPM drives you plan to use. See the drive charts for more info.

But I think what you were really asking was whether a Raptor would suffer from being a 150 MB/s drive in a RAID 0 configuration, and the answer to that is no. The better question is pointed at the controller: can it handle the raw throughput of 6 of any SATA drives? Then there is the "where is this data going" question: can that destination handle the data and not become a bottleneck?
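
To put rough numbers on it, here is a small Python sketch. The sustained rates and the chipset uplink figure are assumptions for illustration only; check the drive charts for real values.

# Does a single drive fill its SATA link, and do six of them swamp the controller?
# Per-drive sustained rates and the uplink figure are rough assumptions.

port_link_mb_s = {"SATA 1.5 Gb/s": 150, "SATA 3 Gb/s": 300}   # per-port link speed
drives_mb_s = {"10K RPM Raptor": 85, "7,200 RPM SATA": 75}    # assumed sustained rates

for name, rate in drives_mb_s.items():
    for link, cap in port_link_mb_s.items():
        print(f"{name} (~{rate} MB/s) on a {link} port uses about {rate / cap:.0%} of the link")

# Aggregate: 6 drives in RAID 0, best case, vs. a guess at what the ICH9R's
# uplink to the rest of the system can realistically move.
array_mb_s = 6 * drives_mb_s["10K RPM Raptor"]
uplink_guess_mb_s = 800   # rough assumption, not a spec-sheet number
print(f"6-drive array, best case ~{array_mb_s} MB/s vs. ~{uplink_guess_mb_s} MB/s controller uplink")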


Hope this helps...............
 

Beep_Beep_Yeah

Distinguished
May 5, 2008
Hey Rozar (or anyone else who'd like to comment on this): you said that "since SATA is serial you have a 150 or 300 MB/s pipe PER PORT" and that no 7,200 RPM drive will fill that pipe.

Why do reviewers on newegg.com recommend removing the jumper from the back of the new Seagate 1 TB drives so that they "run at SATA II speeds instead of SATA I speeds"?

People report getting much better speed from the drive after removing the jumper, but why is that exactly?

What is the purpose of it being there anyway?

Is it safe for the drive to remove it, and will it void the warranty on the drive?
 

rozar

Distinguished
Jun 7, 2007
Well, I think it's just a bunch of misinformation. There is no such thing as SATA I speeds vs. SATA II speeds. There is a 150 MB/s vs. a 300 MB/s link speed (with the 300 MB/s link being "one of" the SATA II extensions). Again, a 150 MB/s drive could be considered "SATA II" if it has NCQ, because NCQ is also one of the SATA II extensions. See the link in my post above.

No hard drive made that I know of will do 150 MB/s, although I think some of the SSDs are getting close. However, the cache on your drive will go that fast, so if your system is reading something out of the drive's cache, it will go slightly faster while it is reading from that cache. So really, as far as anything you would notice, there is no difference. But technically it is faster while reading that small amount of data from cache. If that makes someone sleep better at night, by all means remove the jumper. As long as your controller has this "SATA II" ability, it sure won't hurt anything.

The reason the jumper is there is that when the 300 MB/s feature first came to market, some first-generation controllers had problems working with these drives. This wasn't the only problem, either; SSC (spread spectrum clocking) was a real pain in the rear too. If you check, Seagate ships their drives with SSC disabled as well (download the SSCSET.exe utility to see). So it's really about making the drives work with controllers. If your controller supports 3 Gb/s and you want to use it, go for it.

As for people saying that removing the jumper makes a "huge" difference: ask them to post some tests to prove it.
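
For anyone curious why the jumper is mostly a wash, here is the arithmetic as a small Python sketch. The cache size, sustained rate and transfer size are illustrative assumptions only.

# Only the drive's onboard cache can burst at link speed; the platters cannot.
# Figures below are illustrative assumptions, not measurements.

cache_mb = 16          # typical onboard cache for a 1 TB drive of that era
sustained_mb_s = 80    # assumed sustained platter transfer rate
transfer_mb = 1024     # say we read 1 GB

for link_mb_s in (150, 300):
    burst_time = cache_mb / link_mb_s                         # cache-hit portion at link speed
    platter_time = (transfer_mb - cache_mb) / sustained_mb_s  # the rest at platter speed
    print(f"{link_mb_s} MB/s link: ~{burst_time + platter_time:.2f} s to move {transfer_mb} MB")
# The two results differ by a few hundredths of a second, i.e. nothing you would ever notice.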