
PCI bus speed?

June 6, 2006 7:41:14 PM

I have two questions so far. Question number one: could anyone tell me the actual speed of any device that is connected to the PCI bus? What I have found so far is that it's 33 MHz, which is 133 MB/s, I think. I am not sure if it is 133 MB/s or Mb/s, so if someone could clarify that for me, that would be great.

Question two: would that be faster than gigabit networking? I am looking to build a NAS but I am not sure which route to go. I could buy a prebuilt RAID 5 NAS, just add my drives and go, but that seems to be a bit more expensive, and it is a bit of a hassle to find a BYOD NAS that supports RAID 5. Most do not even seem to perform that well in THG's tests. My other thought was building a computer with 4 drives, using the program FreeNAS, and just having the computer sit in the corner running 24/7.

The last and only thing I need to find out is whether the PCI bus with a 4-port SATA controller would be about the same speed as, or better than, a gigabit network. If it is slower, then I would want to get the best speed I could by using PCI-E, which yet again is more expensive. I know that 100-megabit network speed is somewhere around 12.5 MB/s, so I am thinking that gigabit would be 125 MB/s. If that is the case, then the PCI bus at 133 MB/s would be just perfect, because the network would be running at max speed and I could use the cheaper PCI card for my NAS without having it as a bottleneck.

Thanks in advance for any information anyone has.


June 6, 2006 8:03:38 PM

PCI = 33 MHz x 4 bytes (32 bits) = 133 MB/s

A gigabit network is, what, roughly 100 MB/s after you account for TCP/IP header and trailer info, etc.
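Just to sanity-check those two numbers (a rough sketch; the overhead fraction is an assumption, not a measured figure):

# Back-of-the-envelope bus vs. network bandwidth (assumed ~8% TCP/IP + Ethernet overhead).
pci_clock_hz = 33.33e6            # standard PCI clock (33.33 MHz)
pci_width_bytes = 4               # 32-bit bus
pci_peak_mb_s = pci_clock_hz * pci_width_bytes / 1e6     # ~133 MB/s theoretical peak

gige_bit_s = 1e9                  # gigabit Ethernet line rate
overhead = 0.08                   # assumed protocol framing overhead fraction
gige_payload_mb_s = gige_bit_s / 8 * (1 - overhead) / 1e6

print(f"PCI peak: {pci_peak_mb_s:.0f} MB/s, GigE payload: {gige_payload_mb_s:.0f} MB/s")

So the theoretical PCI ceiling and a realistic gigabit payload rate land in the same ballpark, which is the point here.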

Now, forgetting about that, what would probably affect you more is what kind of machine it was going in, and more precisely what kind of RAID controller you were going to use. If it were an inexpensive controller, you would probably be hindered more by the controller than by the PCI bus.

There's more to it than just raw bus or raw network speed.
June 6, 2006 8:06:15 PM

Yes, PCI is 133 MB (megabytes) per second, and as far as I know, it is shared among the other PCI devices.

For the second question, I would do the NAS myself. You can do it on Linux, for example, using Samba and mapping the NAS on the Windows machines. You can use any board with on-board SATA; I think the SATA channels are connected directly to the southbridge, which is connected to the northbridge over some dedicated bus, so you don't have to rely on PCI. Then you can add a gigabit card (if the board doesn't have one already), which will be alone on the PCI bus.
June 6, 2006 8:32:23 PM

I bought an NSLU2 about a year ago and it was fairly good until about a week ago. The drives seem to go into some type of sleep mode where it takes about a minute for the NSLU2 to access the data. I have the drives mapped through Windows XP SP2 and never had a problem before. I also had a WD2500JB fail on me; not entirely, it just corrupted a lot of my songs, video clips, and backup files. I hope my current WD3200JB does not have the same thing happen. That's why I want to go to some type of RAID for data protection. RAID 1 is great, but at the cost of an entire hard drive I do not believe it would be worth it. I want to go with RAID 5 because of the data protection and the increased performance.

So the deal for now is I would be using a 10/100 NIC in a Celeron 466 MHz HP computer with 256 MB of PC100 RAM, plus some SATA controller and hard drives. Since FreeNAS uses software RAID 5, I could then upgrade the computer from the Celeron 466 to an Athlon 64 3700 without losing my RAID setup. I was thinking about just a cheap 4-port SATA controller without RAID support and four 300 GB+ hard drives. That way I will have all the space I will ever need *brings back memories of the first GB hard drives and the same saying* and can upgrade when I have the money.

Maybe I am thinking about all this too hard and I should just go with something simple like Intel's RAID 5 NAS or the Yellow Machine. They seem like worthwhile products, but from the reviews they don't offer everything I'm looking for. I believe the Intel is 10/100 only and the Yellow Machine has poor performance on gigabit. That's why I was looking at a full computer system.
June 6, 2006 8:41:53 PM

For me, I don't rely solely on a redundant disk as a backup. What if you have a dual-disk failure in your RAID 5 array? (I've seen it.) In my case it was due to a backplane, not dual disks, but the array was gone nonetheless. You had better have a backup of your data if you want to recover. Maybe you have another RAID setup, 5 + hot spare, or some other array. Then what if your house burns down? Are you happy taking that risk? If not, you need a tape backup, which may or may not be overkill for your particular need.

As far as what to do: unless you build a machine with a very expensive SCSI array, or with a less expensive controller and a big CPU to offset the software RAID, then personally I think your focus on trying to avoid saturating a gigabit network is somewhat misplaced.

I guess it comes down to what you are really going to be moving around on this box, how many people you plan to serve, and how much money you want to pour down the hole in the floor. :wink:
June 6, 2006 8:59:27 PM

Thanks for the info, and you are right. I am not trying to serve a lot or run a huge file server; it's just for my family of 5. They all rarely use it, so it's mostly just me, using Daemon Tools to mount a game image, listening to my MP3 collection, or watching a DVD image. I just got very frustrated when half my MP3s and some of my movies got corrupted on my WD2500JB, and I never want that to happen again. It really sucks downloading a gig on 56K :p  so that's why I try to save everything I download, such as all my drivers, updates, patches, games, movies, and music. I am just looking for some data security that is able to fully use gigabit, such as RAID. Do you think it would be better to just buy a hardware RAID card that supports RAID 5? If I do that, then I will have to buy a PCI-E motherboard with a 939 or 775 CPU, some more RAM, and the hard drives. I just figured it would be a lot cheaper to use an old HP computer, throw in a $40 4-port SATA card with four 300 GB hard drives, and use software RAID until I get a gigabit PCI card or totally overhaul it with a whole new computer.
June 6, 2006 10:08:37 PM

From what you have said, I think you are more concerned with storage amount and redundancy, and not necessarily network speed. I think you could go with less expensive hardware, and sacrifice a little bit on the network speed.

For example, I think if you build a server with a gigabit network card, but perhaps don't spring for the most expensive hardware RAID controller or a brand-new PCI-E motherboard, then your network speed is going to suffer a little bit while the processor and controller catch up with processing the data. But at the end of the day, your network might be 2x 100Base instead of full gigabit. So what? It's probably not like you intend to move 20 gigs of files every day.
June 7, 2006 12:40:50 PM

Pain, do you think that a Celeron D 320 at 2266 MHz would be able to run software RAID 5 and a gigabit network card, or would I need something more powerful than that?

Also would the amount of RAM make a difference? I am not talking about 128-256, but 512 or more. Would 1GB be overkill?

Lastly, can someone recommend a SATA controller card? It does not have to have RAID 5, but at least 4 ports, unless the Celeron D is not enough CPU power to do software RAID 5. I doubt it will make a difference whether I use SATA 1.5Gb or SATA 3.0Gb, because over the network, with the RAID 5, SATA 1.5Gb should not be much of a bottleneck. My motherboard only has PCI slots, so a PCI-E card is out. I just cannot seem to find a hardware SATA card that supports RAID 5 on PCI. That is why I was thinking about having my Celeron D handle the workload.
June 7, 2006 1:00:34 PM

I don't really know, but let me first clarify something. When I was saying software RAID, I meant a controller that does the RAID function in software, like a HighPoint 1640 for example, i.e. not a hardware RAID controller. If it were me, I'd still use a RAID controller and not do the RAID within Windows, if that's what you mean to do.

I think the 2 GHz Celeron would do OK, but that's a guess, and 512 MB of RAM would probably be OK too. I doubt going to 1 GB would add any performance, though it wouldn't hurt.

You could of course always use Linux, which I can't offer any advice on since I've never used it, but it will be less of a resource hog... of course you'll have to learn Linux if you don't already know it.

I think PCI-X cards are backwards compatible with PCI, so you could use a card like this LSI Logic:

http://www.newegg.com/Product/Product.asp?Item=N82E1681...
June 7, 2006 1:41:07 PM

Do you have the drives already? Because if you don't, as I said, you could implement your NAS in Linux. Linux has software RAID that works with any drive, including IDE. That means you won't have to buy the SATA controller.
You can implement a RAID 5 array, and for that the Celeron D with 512 MB would be more than enough.
June 7, 2006 2:11:14 PM

Actually, I've been planning on setting up a NAS device for the same reasons as you. Here's some info to consider:

Regular PCI is (1) a shared bus and (2) half-duplex. This means (1) that only a single device can talk on the bus at a time, and all other devices are forced to wait to communicate until the device using the bus stops. If the talking device is faulty, or hangs, then the bus goes unutilized until the device releases it. This also means (2) that devices cannot read and write at the same time. It's one way or the other.

In other words, the PCI bus is REALLY bad for anything bandwidth-intensive. If you think to yourself, "I just won't put anything in my PCI slots other than my storage controller," keep in mind that many peripherals built onto the mobo still use the PCI bus, such as the integrated sound, FireWire controller, USB, network, etc. Be sure to check the architecture docs of the chipset your motherboard uses, and the BIOS, to make sure you can disable those components. Intel's 875P/865 chipsets introduced CSA, which gave the onboard gigabit Ethernet a dedicated link to the chipset without going over the PCI bus. Again, check the architecture docs for your chipset if performance is that important.

The ICH7R (and higher) southbridge by Intel has RAID 5 capability built in (Matrix RAID, 4 SATA drives, depending on mobo mfr BIOS support). This is probably your best bet. The storage data goes right into the southbridge, and then over the DMI bus to the northbridge for the most direct route possible. I'm not sure how good the performance is, since the RAID work is done by the southbridge and driver rather than a dedicated card, but unless you plan on doing a lot of random writes (and it doesn't sound like you are), performance won't be an issue, and this is going to be your best bet from both a performance and an economic standpoint.

Keep in mind that the RAID 5 bottleneck is parity calculation, which only occurs when you're writing data. On better controllers this is helped along by an XOR engine (a separate chip). Reads are usually really fast, since data can be streamed from multiple drives simultaneously and buffered. If you're looking for a reliable archive computer, you shouldn't be hitting any real bottlenecks. Unless you invest in a nice Adaptec adapter ($400+), you're not going to see any real performance difference between cards, and I doubt you'd notice anyway, considering your application.
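If the XOR part sounds mysterious, here is a toy sketch of the idea (not how any particular controller implements it): one parity block lets you rebuild any single lost data block.

# Toy XOR-parity example: three "data blocks" plus one parity block.
# Any single missing block can be rebuilt by XOR-ing the survivors.
import functools, operator

def xor_blocks(blocks):
    # Byte-wise XOR of equal-length blocks.
    return bytes(functools.reduce(operator.xor, byte_tuple) for byte_tuple in zip(*blocks))

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"     # data blocks on three drives
parity = xor_blocks([d0, d1, d2])          # parity block on the fourth drive

# Pretend drive 1 (d1) died; recover its block from the others plus parity.
rebuilt_d1 = xor_blocks([d0, d2, parity])
assert rebuilt_d1 == d1
print("rebuilt:", rebuilt_d1)

The write penalty comes from having to recompute that parity every time a data block changes.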

As for gigabit Ethernet, your RAID 5 array probably won't burst higher than 100 MB/s anyway. With all of the buffers between your hard drive and your networked computer, unless you're moving a lot of small files (<1 MB), transfers will be noticeably fast.

In short, a BYOD NAS RAID device with about 750 GB of total [secure] storage will run you about US$800 new if you go with ICH7R and gigabit Ethernet. Intel Matrix RAID 5 has been around for a few chipset generations now, so it's a stable, cheap, reliable solution. You can go up to 2.25 TB (3 TB before parity) for about $2400.
June 7, 2006 2:18:21 PM

Quote:
Do you have the drives already? Because if you don't, as I said, you could implement your NAS in Linux. Linux has software RAID that works with any drive, including IDE. That means you won't have to buy the SATA controller.
You can implement a RAID 5 array, and for that the Celeron D with 512 MB would be more than enough.


That's more or less what I had. I used to run some Linux distro on a Pentium I at around 200 MHz with next to no RAM, and a RAID 5 array made from a mix of SATA and IDE drives.
June 7, 2006 2:19:24 PM

I would use the program FreeNAS. I believe it's still beta, but it works quite well ( www.freenas.org ). I want to go with SATA because for RAID 5 to work well you need each HD on its own channel. With IDE RAID 5 and only two channels, putting two hard drives on one channel is fine until one fails; then the other one drops out too because of the master/slave setup. That is what I read and what I found when I set up a test system with three 40 GB HDs. Since FreeNAS has its own software RAID, I figured I would not have to include a card with its own RAID hardware/software, and going with just a SATA controller would be cheaper. Then I can use plain PCI and would not have to get a whole new computer setup.

I have never worked with Linux any more than installing FreeNAS or putting in a Knoppix CD. FreeNAS works well and is very simple to set up. I would recommend it to anyone who wants to build a NAS.

I do not have any drives yet. I am waiting for some less than 30cents/GB HDs to show up on slickdeals.net.
June 7, 2006 2:36:04 PM

That thing is awesome :)  Runs linux, too ;) 

Personally I was going to drop a board into an HTPC Case and customize the VFD display to show # of connected users, free space, RAID faults, etc. The system will be Wake-On-Lan and will automatically suspend-to-ram if no one accesses the drives in 20 minutes.

Plus, the Intel entry-level NAS device is too easy. You just plug it in and it works? Where's the fun in that? :) 
June 7, 2006 4:44:57 PM

Well, thank you all for the great information. It was a big help. I guess "I'm damned if I do and damned if I don't". That Intel chipset does look like it would be very good for what I need, but there is one problem: my Celeron D is not socket 775. The motherboard, CPU, DDR2, and 24-pin power supply will cost about $250-300 minimum, and that is without hard drives; it does not look like I will be able to get by with much less than that. The pre-built RAID 5 NAS devices all seem to be around the same price or a lot more. With the information from Whizzard, I do believe that going with just PCI would be a big pain in the arse. I do love the idea of the HTPC case with the VFD display. If you can do that, leave me a PM; I would love to implement that in my NAS.
June 7, 2006 5:42:42 PM

Thanks. Will do.

I'm talking to a couple of VFD manufacturers (still haven't heard from thermaltake :( ) about developing a NAS 'watchdog' application for the VFD running on windows. It'll likely go on SourceForge unless the Manufacturer decides that they want to bundle it. The VFD manufacturer for SilverStone (they make nice HTPCs) doesn't support 3rd party developers.

The one manufacturer I'm talking to has a VFD that fits in a 5.25" drive bay. I'll let you know when it's done.
June 8, 2006 6:07:04 AM

Before you get fancy with RAID and gigabit, I think you should start off by treating your drives well, and this means having a good PSU, drive cooling, and decent drives to start with. For the PSU, budget around 25 W at 12 V per drive for start-up power. An online PSU calculator can help you judge the entire computer.
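As a quick worked example of that budget (the 25 W figure is just the rule of thumb above, not a measured spec for any particular drive):

# Spin-up budget on the 12 V rail for a small array, using ~25 W per drive.
drive_count = 4
spinup_w_per_drive = 25                     # assumed worst-case start-up draw
total_w = drive_count * spinup_w_per_drive
amps_at_12v = total_w / 12.0
print(f"{total_w} W at spin-up, roughly {amps_at_12v:.1f} A on the 12 V rail")

So four drives alone can want around 100 W at power-on, on top of whatever the rest of the system needs.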

Ensure that your drives will have some airflow going over them. Ideally a nice 120mm fan, but not-too-loud 80mm fans also work fine. I'd aim for drive temps in the low 30s (C).

RAID 5 is great in theory, but difficult to optimize in practice. A single drive can easily sustain 30 MB/s over consumer gigabit (with a modern CPU), which seems to be as much as any inexpensive NAS manages even with bigger RAID arrays, and that is around a 3x leap over standard Ethernet, so well worth the effort in general. Don't go RAID + fast Ethernet; go single IDE + scheduled backup + gigabit instead if you have to compromise.

Good RAID 1 implementations should have striped performance for reads, and are better supported and easier to get working than RAID 5 arrays.

So here's one possible setup, using 3 big drives, e.g. 300 GB each: drive 1 (perhaps PATA) with the boot partition, an optional 2nd partition for key programs + data, and a 3rd partition as backup (say >250 GB) (you be the judge on the details of partitioning – this is just a sample setup). The 2 other drives go in RAID 1, backed up (perhaps with filtering / compression) onto the 1st drive.

This will require less performance tuning and hardware capability, and still go a good distance towards performance and reliability – better than any single NAS or RAID 5 setup discussed so far. Performance will not be as high as a big DIY RAID 5 setup, but will be a lot better than fast Ethernet, and not a lot worse than what you’d probably get with a single big RAID array over the network; probably just fine for your applications, and pretty good for backups.

Later on, you could expand in-place in this way for example: Adding 2 more 300 GB drives, you could go RAID 5 x 3 ~ 600 GB for your main array, and use spanning for the backup ~ 550 GB.

Of course you can go bigger still by separating storage and backup onto separate machines down the road.

Consider also that by the time you need to expand, consumer NAS's might come closer to affordable DIY RAID in performance, and so be reasonable alternatives, esp. for backup. By that time, bigger single drives might also be more affordable alternatives for backups. But of course, most people would tend towards a higher-performing new main server, and probably want to re-use the old server for backups.

Here’s a MB that a quick search on Newegg turned up. I have no experience with this MB/chipsets, so can't comment on Linux support or performance details.

http://usa.asus.com/products4.aspx?modelmenu=2&model=17...

Socket 478, gigabit, SATA x 4, PATA x 3. DDR, AGP.
June 8, 2006 12:26:18 PM

I was going to use my Antec True Power 480. I have plenty of spare fans lying around. My father has an Antec case which has five 3.5 in. drive bays inside, all with rubber mounts and space for a 120mm fan. He uses two WD360JD Raptors and two Seagate 80 GB SATA drives for backup. The drives all stay very cool to the touch, even while doing big transfers or lots of file access.

I would rather go with RAID 5 because I would not be losing 50% of my space. With RAID 1, sure, it is close to 100% data security, but you lose half the drive space; one hard drive is just for the mirror, so you do not get to utilize it. With RAID 5, I believe you only lose the space of one hard drive no matter how many drives are connected. That means the more drives I include, the smaller the total percentage lost to parity.

My goal is to reach 1 TB or more, with transfers that are fairly good. I am not trying to reach 100 MB/s, just 30-40 MB/s sustained. BTW, I only have a Trendnet 16-port 10/100 switch; the only reason I would like gigabit is for the future. I have no need for daily backups, just lots of downloads: movies, games, music, updates, patches, expansions, drivers, Linux distros, and just about any program I need when freshly formatting my computer. It is very nice to just install LAN drivers and then have all my instant messaging programs, music players, game images, movies, and anything I want to install right there, without having to download off the internet at 5K/sec. I never store anything of value on my gaming computer; I have Seagate 80GBs in RAID 0. I lost files before that I had stored on my RAID 0, and that's when I went to the NSLU2, which worked well until my 250GB started corrupting files. Now I have my 320GB hard drive and am just waiting for that one to go bad. *crosses fingers*

I do not understand what you mean by

"Later on, you could expand in-place in this way for example: Adding 2 more 300 GB drives, you could go RAID 5 x 3 ~ 600 GB for your main array, and use spanning for the backup ~ 550 GB."

With RAID 1 those two 300GBs would be 300GB total, and would it even work to move a RAID 1 to RAID 5? Still, even if it did, why not just go with RAID 5 in the first place? It is just the cost of one more drive, and the write speeds are better. Not that the speed matters that much.

That motherboard is pretty close to what I might want. It has just about everything, but the SATA ports are 2 on the chipset and 2 on another IC. Can I do RAID 5 with that? Also, there is no onboard video, so when I install FreeNAS I would have to put in a video card. It would just be easier if it had onboard video.

My friend mentioned that I should look into RAID 10. From my understanding that would still be a 50% loss of drive space; it would just make up for RAID 1's lack of write speed. Still not what I am looking for.
June 8, 2006 1:50:21 PM

Quote:

I do not understand what you mean by

"Later on, you could expand in-place in this way for example: Adding 2 more 300 GB drives, you could go RAID 5 x 3 ~ 600 GB for your main array, and use spanning for the backup ~ 550 GB."

With RAID 1 those two 300GBs would be 300GB total, and would it even work to move a RAID 1 to RAID 5? Still, even if it did, why not just go with RAID 5 in the first place? It is just the cost of one more drive, and the write speeds are better. Not that the speed matters that much.

That motherboard is pretty close to what I might want. It has just about everything, but the SATA ports are 2 on the chipset and 2 on another IC. Can I do RAID 5 with that?


Another way to look at the 2x raid 1 vs. 3x raid 5 is that both cost you a single drive for redundancy. I suggested a smaller setup to start to reduce cost, to which you seemed sensitive. With a backup, it doesn't matter whether or not the Raid 1 converts to raid 5, you can convert between raid formats as often as you'd like. A backup solution by design is highly recommended for this and other reasons. Raid is not a backup.

Raid 1 allows breaking the raid array, making it essentially two simple drives, and some raid 5 implementations allow you to extend a simple drive to a raid array. Don't count on it however without checking the implementation.

If you have an OS-level software raid, then the drive interface doesn't matter -- you can even combine PATA and SATA drives.

You seem to have the power & cooling well in hand. Good luck with your other choices.
June 8, 2006 2:07:23 PM

I think it boils down to what you really want. Here's an overview of the different RAID configurations:

RAID 5

PROS: Real-time data reliability. Easy to span multiple drives. Offers greatest size capacity (anywhere from 3-16 drives with capable hardware)

CONS: Slowest of all configurations (still faster than a single-drive setup).

SIZE: (S * N) - S (where S = drive size and N = number of drives.)
4x500 GB drives will yield 1500 GB of usable space, before the file system.


Writes require a parity calculation, and this is often handled by interrupts to the processor or in software. The lack of a sizable buffer in consumer-level RAID hardware means that reads can't be striped as well as on enterprise-level hardware. RAID 5 provides good read speed, good write speed for larger files, and poor write speed for random file writes.



RAID 10 (Actually RAID 1+0)

PROS: Offers the best speed of all configurations. Good for random writes.

CONS: 50% of the drive space is unusable (taken up by mirroring). High cost per GB. Capacity limited (typically a maximum of 4 drives on consumer controllers).

SIZE: (S * N) / 2 (where S = drive size and N = number of drives.)

RAID 10 is shorthand for RAID 1+0, which means you're striping two mirrored sets. This is the "Best of both worlds" approach where you benefit from the speed provided by striping, while maintaining the reliability of mirroring. For advanced hardware that's smart enough, you can read from up to 4 drives at a time for some SERIOUS burst rates. The downside is that you're limited to a 4-drive configuration, and you lose 50% of the space on those drives. RAID 10 is good if you're looking for more capacity than a mirrored set can offer, but require better reliability than data striping alone.
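To put the two size formulas side by side for the drive sizes discussed in this thread (a small illustrative calculation; "usable" here ignores file-system overhead, and the function names are just mine):

# Usable capacity from the two formulas above: RAID 5 = S*N - S, RAID 10 = S*N / 2.
def raid5_usable(size_gb, n_drives):
    return size_gb * n_drives - size_gb    # one drive's worth of capacity goes to parity

def raid10_usable(size_gb, n_drives):
    return size_gb * n_drives // 2         # half the raw capacity goes to mirroring

for size_gb, n_drives in ((300, 4), (500, 4)):
    print(f"{n_drives} x {size_gb} GB -> "
          f"RAID 5: {raid5_usable(size_gb, n_drives)} GB usable, "
          f"RAID 10: {raid10_usable(size_gb, n_drives)} GB usable")

The 4 x 500 GB line matches the 1500 GB figure in the RAID 5 section above; for the four 300 GB drives being considered, it works out to 900 GB vs. 600 GB usable.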



To answer a couple questions:

- Losing a drive in RAID 5 or RAID 10 doesn't cost you any space, free or otherwise. If you lose a 2nd drive, however, your entire array may become unusable, so make sure you replace a bad drive ASAP. More advanced cards have something called a 'hot-spare', but that doesn't apply here.

- PATA is horrible for RAID. Don't even consider it. You'd be better off software RAID'ing two external USB drives.

- RAID array performance depends heavily on the hardware and buffer size, but as a rule of thumb: RAID 0 (striping) reads/writes can be up to 66% faster. RAID 1 reads up to 100% faster, and writes are the same as single-drive solutions (or close). RAID 10 offers great read speeds with good write speeds at the cost of capacity. RAID 5 offers good (great on enterprise hardware) read speeds and poor-to-acceptable write speeds.

- Don't worry about your power supply. Most drives require 8W. Even the hungry raptors only require 12W. Unless your drives are stacked on each other, passive cooling will suffice. There's no need to add additional cooling unless your drive says it requires active cooling (i.e. Raptor).

- You can span multiple drive controllers and use software RAID, but it's a pain to administer and is MUCH, MUCH slower than a hardware solution; even a cheap one.

- If you can avoid it, don't jimmy-rig drives together and then 'upgrade' to RAID. Creating a RAID array obliterates the existing data on the drives, meaning if you have 300GB of files and 400GB of space, and you're looking to add another drive, you have to move all 300GB of files to ANOTHER drive, create your array, then move them back. You're better off just doing RAID 5 the first time.


Hope this helps.
June 8, 2006 2:15:43 PM

Quote:
With a backup, it doesn't matter whether or not the Raid 1 converts to raid 5, you can convert between raid formats as often as you'd like.


Completely false. RAID arrays cannot be converted. They have to be destroyed (with all the data going with them) and re-created. Yes, you can break a mirror, but you can't convert from RAID 10 to RAID 5, or vice versa.

Quote:
Raid is not a backup.


It is for a lot of people unless you can afford $1,000 for a tape backup and $50 a tape.

Quote:
and some raid 5 implementations allow you to extend a simple drive to a raid array.


Not sure what you mean here... all RAID 5 arrays work the same: you give up one drive's worth of capacity to parity (which is distributed across the drives). RAID 6 costs you two.
June 8, 2006 2:24:39 PM

It's really wonderful when you encounter a whiz who knows it all, including the performance of every software RAID vs. every hardware RAID, and who can declare statements true or false based on his own perfect understanding of semantics. I leave you to his expert guidance.
June 8, 2006 2:39:11 PM

I didn't mean to offend you, Madwand :( 
June 10, 2006 1:35:35 PM

Quote:
Completely false. RAID arrays cannot be converted. They have to be destroyed (with all the data going with them) and re-created. Yes, you can break a mirror, but you can't convert from RAID 10 to RAID 5, or vice versa.


Careful when you use the word "completely". Some RAID vendors offer online RAID conversion as part of their feature set. Admittedly it is not a typical feature of PC-based RAID solutions.

Some RAID implementations let you do this (where parens show concatenation):

RAID10 => (AB) (CD)

RAID0 => (AB)
RAID5 => (AB)C
RAID5 => (AB)C(D)
June 10, 2006 8:04:14 PM

Quote:
Completely false. RAID arrays cannot be converted. They have to be destroyed (with all the data going with them) and re-created. Yes, you can break a mirror, but you can't convert from RAID 10 to RAID 5, or vice versa.


Careful when you use the word "completely". Some RAID vendors offer online RAID conversion as part of their feature set. Admittedly it is not a typical feature of PC-based RAID solutions.


The concept of RAID conversion is futile, though, because in every case you lose all fault tolerance during conversion. If the conversion fails, you lose all of your data. Also, if your primary partition is on the Array, you can't convert it via software. Even if just adding a drive to a RAID 5 array, your parity needs to be redistributed and you still lose fault tolerance during 'conversion'.

Either way, if you convert or destroy/move, you NEED to backup your data, which is why you don't see hardware supported conversions from the major vendors such as Adaptec. Why would you convert if you can destroy/re-create faster? All of the RAID conversion I've seen out there is open-source/freeware style, which isn't something I'd trust.

So yes, I agree you can convert RAID arrays, but the applications are unreliable, and few and far between, and there are some conversions that are impossible (Such as RAID 5 to RAID 10 - I had it backwards in my previous post).
June 6, 2009 3:21:28 AM

Here's a question for you...

If PCI max bandwidth is 133 MB/s, why are there PCI SATA RAID controllers supporting SATA II @ 300 MB/s? I actually just got suckered into buying one of these PCI cards because I didn't think I had a free PCI-E slot, until I saw the dinky PCIe x1 slot on my board when I opened the case.
Now HDTach reports my 7200 RPM 3.0 Gb/s drives capped at about 130 MB/s sustained reads thanks to the PCI bus... :na:

I've been out of the game too dang long.