I have decided to try striping 2 HDDs together. I plan to use my existing 120GB IBM 180GXP (8MB cache) and was thinking about pairing it with a Hitachi 7K250 of 120GB (8MB cache). Both would be UDMA100, as SATA150 doesn't do enough yet to be interesting... will this work, or should I save up for 2 HDDs of the same size and model?
Oh, I read the manual, and it says that on the A7N8X-Deluxe only the SATA ports have onboard RAID 0/1, but I do have a PCI RAID card (rated UDMA133).
Trust me I know what I'm doing... ooops, grab the cat...
Edited by marneus on 04/14/04 07:50 PM.
Should work fine, though I'd recommend partitioning carefully to get the most from such a setup. Also, a separate hard drive for the swap file would help, unless you have scads of RAM, of course, which makes it less important.
It's more withdrawal symptoms... I haven't done ANYTHING to my PC since Christmas. XP is starting to get a bit creaky (11GB somehow, and that's just the OS and apps; games are on their own partition, 40GB and nearly full), so I thought about what I could do to my PC while doing a full rebuild, and I came up with RAID...
RAID is very nice. I had three 60-giggers in a RAID-0 for a while, and noticed game levels loading very, very quickly. I was always the first one into the next map in online games.
I think using two different brands is fine, but two different cache sizes is a little iffier, and so are different bus speeds (like one at 133 and one at 100). Use good cables; stay away from rounded cables unless they use twisted pairs on the inside or, even better, have shielding like the Antec Cobra Cables do.

If you have any other hard drives available, I'd recommend putting the operating system onto a single drive and using the RAID volume separately. That way, any problem with the RAID drivers, BIOS upgrades, or a disk failure in the RAID-0 won't prevent you from booting your system. It's happened to me: I upgraded the BIOS on the RAID card, and bam, it was suddenly incompatible with my motherboard, and whoops, all of a sudden I also had no operating system. It's just safer to keep the "special" disk as a separate volume that doesn't take the whole system down if it fails for any reason.
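For anyone wondering why a RAID-0 reads so much faster, here's a toy sketch of how striping maps logical blocks round-robin across the member disks, which is why sequential transfers can approach N-times single-disk throughput. The 64 KiB stripe size is just an assumption; real controllers let you pick it.

```python
# Toy RAID-0 mapping: consecutive stripe units alternate across disks.
STRIPE_SIZE = 64 * 1024  # bytes per stripe unit (assumed, configurable on real cards)
NUM_DISKS = 2

def locate(logical_offset):
    """Map a logical byte offset on the array to (disk index, offset on that disk)."""
    stripe_unit = logical_offset // STRIPE_SIZE
    disk = stripe_unit % NUM_DISKS
    offset_on_disk = (stripe_unit // NUM_DISKS) * STRIPE_SIZE + (logical_offset % STRIPE_SIZE)
    return disk, offset_on_disk

# The first four stripe units ping-pong between the two disks,
# so a big sequential read keeps both spindles busy at once:
for unit in range(4):
    print(unit, locate(unit * STRIPE_SIZE))
```

It also shows why mismatched drives hurt: every other stripe unit lands on the slower disk, so the array runs at roughly twice the speed of the *slower* member.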
And stay away from Promise products. I could go on for a long time about my terrible experiences with them. Everybody else I talk to agrees (except one person on this forum who's had a good experience with them), and many of them experienced the exact same symptoms I had on my system.
I'm using one of those converters in about the worst possible scenario you could, and it's working great so far. Here's the setup:
HighPoint RocketRAID 1820 (? - the 8-SATA one for PCI-X). Four Raptors plugged into it over direct SATA, and one old ATA100 PATA drive on an individual channel. The ATA100 PATA drive is in a "BR-IDE35" PATA hot-swap enclosure, which converts the PATA connector to a special connector that is mechanically better for insertion and removal, and then back into a PATA connector on the back of the enclosure box. So there's some potential signal weirdness right there to start. Then the SATA converter is plugged into the back of the enclosure, and then into the RAID card through a reasonably long SATA cable. I've tested the hot-swapping, and it works great. Haven't had any problems with it yet.
BTW it's a converter that came with an Abit NF7-S. I think it said "Antec" on it, but I didn't know Antec made converters so who knows.
I do know that the BR-IDE35 can be bought in a SATA version, and if you get that version you can get replacement HDD trays for it that will convert PATA to SATA from within the enclosure itself. I really like the enclosure, it is very functional and looks very nifty. (but maybe expensive compared to just getting converters.)
I didn't know you'd gone for the Highpoint 1820, how is it performing? I considered it when I bought my 1640, but at the time thought it was overkill. Looking back I could probably make use of the fact it utilises PCI-X (on a future mobo) considering all these Raptors I've accumulated.
On a slightly less relevant note, I've been thinking about SATA enclosures and hot-swapping too since you first started talking about it, but seeing as I'm likely to be moving house in the near future, I think I'll wait and see if there's a room I can use to house a hub and server, and do it with the server instead of my desktop.
I could be wrong about this, but I think you have to have a PCI-X slot to use it. (At least I hope you do; I spent way too much money just to get it...) I haven't really finished testing its performance yet... I should load up an old raw NTSC AVI file and see if I can view it at full speed now. I do know that I have a 3Ware Escalade 7506-4 installed with four 200GB drives on it, and I had the Premiere install files located on that array and installed to the Raptor array... Premiere installed itself in 5 seconds! I've had one person badmouth the HighPoint cards, saying they are crap and fail early, but he didn't have much to back it up other than "someone I knew had one and it failed" - and then he recalled that the PSU blew up in that computer, and so did the video card, motherboard, hard drive... so I don't know how this will turn out yet.
Right now, I'm wishing I had a gamer motherboard with PCI-X slots on it, because if I did, I could install my games across the network to the Raptor array, and by Sandra benchmarks I would get 100MB/sec across the network for level loading. Unfortunately, a normal PCI gigabit Ethernet card tops out at around the transfer rate of a normal PATA hard drive, so unless I had a PCI-X gigabit card in the game system it would be pointless. (I'll just have to put up with loading levels at 1X speed... I'll miss having RAID on my gaming rig.)
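The back-of-the-envelope numbers behind that: a plain 32-bit/33 MHz PCI bus peaks near 133 MB/sec shared among every card on it, which is barely more than gigabit Ethernet's ~125 MB/sec of raw payload, so a PCI NIC can't deliver the array's full speed. This is just theoretical peak arithmetic, not a benchmark:

```python
# Theoretical peak bandwidth of a parallel bus: (width in bytes) x (clock).
def bus_bandwidth_mb_s(width_bits, clock_mhz):
    """Peak bandwidth in MB/s, ignoring protocol overhead."""
    return width_bits / 8 * clock_mhz

pci     = bus_bandwidth_mb_s(32, 33)    # plain PCI, shared by all PCI cards
pci_x   = bus_bandwidth_mb_s(64, 133)   # a 64-bit/133 MHz PCI-X slot
gigabit = 1000 / 8                      # gigabit Ethernet payload, ~125 MB/s

print(f"PCI: {pci:.0f} MB/s, PCI-X: {pci_x:.0f} MB/s, GigE: {gigabit:.0f} MB/s")
```

So on plain PCI, the NIC alone nearly saturates the whole bus before the disk controller gets a turn, while a PCI-X slot has roughly eight times the headroom.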
The enclosures are very nice. They have individual access LEDs for each drive, can be locked in place, and best of all, if you work it right you can eject a hard drive while Windows is running and put it in another system - or even put an HDD you want to do something to (scan, format, maybe even install Windows to) into a hot-swap bay, do your thing, then carry the drive back to your gaming rig and put it back in.

One word of caution: the hot-swap bays are very long. My case kind of sucks on multiple fronts, but I actually had to get out the drill press and drill custom holes in the drive cages so the bays could stick out the front and avoid running into the motherboard. That's partly because I have an EATX mobo, though. Some enclosures have fans built into the back, especially SATA enclosures, which is nice. I've spent a lot of time looking for these things and been through two brands now, so if you need some I can help you find them.
My recommendation, if you're building a gaming rig: get one of the new chipsets that supports P4 and PCI-X, or maybe a single-proc Opteron system for that crazy memory bandwidth it has, and you'll be able to take advantage of your RAID speed from anywhere in the house you plug in, as long as you have gigabit hardware for it. (But of course don't sacrifice CPU power to get it - that's more important than faster level loads.) And don't do PATA RAID. I've completely given up on it. I wish I had never bought those PATA drives that are in my RAID-5.
Do you also have a PCI-X graphics card, as well as a hard drive controller card?
I have looked at the PCI-X standard, somewhat, and am a bit fuzzy on something.
It seems that the PCI-X bus is divided into several PCI-X channels that all run at the same speed. When a card needs more bandwidth it takes up, or plugs into, more channels, and when it needs less it takes up fewer.
So what I was wondering: can cards share the same channels? Do you allocate channels in the BIOS as you can with PCI IRQs?
I know cards can be made with a "short PCI-X" card edge or a full-size one, but do cards ALWAYS use ALL of the PCI-X channels they plug into, or can you specify?
Also, I have not read anywhere that a PCI-X card will work with PCI. So I am nearly positive that you had to spend the bucks for your motherboard.
I have been waiting for a while for PCI-X, and it seems this year I will get it.
So, from your post, you are NOT using any type of frame/carrier configuration for your Raptor drives?
I managed to find a motherboard with the Intel 7505 chipset that has both PCI-X and AGP, so I'm using a Quadro4 750XGL. I hadn't heard about PCI-X video cards... maybe I should look into this, because I can always use more monitors!! Well, no, maybe two is enough... nahhh!
I haven't heard about that "channels" thing you're speaking of... I do know that if you look up a block diagram of the 7505 chipset, you'll see several independent ways for a PCI-X bus to connect to the processor. My system has two PCI-X busses: one that has only one visible slot and can run up to 133 MHz, which is cool because that HighPoint card supports it. The other bus on my board is limited, I think, to 100 MHz, and has two slots on it. If I were to, say, plug a 33 MHz card into one of those two slots, then the other slot would also have to run at 33 MHz, but the independent 133 MHz slot would still run at full speed. I've seen boards with three independent PCI-X busses, but they were all over the $500 mark, which was too much for me.
I can go into the BIOS and manually set the PCI-X bus speeds to 33, 66, 100, or 133 MHz (that last one for the independent slot only), but I don't recall any settings other than that. So I'm thinking maybe you read about those independent things I've been calling "busses" to avoid confusion, and thought they were properties of each slot? Or it's equally possible that in my haste I brushed over my PCI-X standards reading too quickly and am unaware of internal channels in each bus.
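The bus-speed rule above boils down to "a shared bus clocks down to its slowest card, while an independent bus keeps its own speed." Here's a tiny sketch of that rule; the slot layout mirrors the example in this thread (one 133 MHz slot on its own bus, two slots sharing a 100 MHz bus), not anything from the actual PCI-X spec text:

```python
# Clock negotiation sketch: every card on one shared PCI-X bus runs at the
# clock of the slowest card present (capped by the bus's own maximum).
def effective_clock(bus_max_mhz, card_speeds_mhz):
    """Clock (MHz) a shared bus actually runs at once cards are installed."""
    if not card_speeds_mhz:
        return bus_max_mhz  # empty bus idles at its maximum rating
    return min([bus_max_mhz] + card_speeds_mhz)

# Independent 133 MHz bus with just the RAID card: full speed.
print(effective_clock(133, [133]))
# Shared 100 MHz bus: one 33 MHz card drags the other slot down too.
print(effective_clock(100, [100, 33]))
```

That's why boards with more independent busses cost more: each extra bus is another set of slots that can't be slowed down by its neighbors.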
And yes, I think soon you will be getting your PCI-X, if not already... You could, at the very least, get one of those Iwill DH800 boards that supports an 800MHz FSB Xeon, and only get one Xeon for it, and it would be as fast as an equivalent P4 system - maybe faster, due to the really beefy busses they put between everything on a pro board. I do know that on this board (Supermicro X5DAE) the built-in gigabit LAN controller is connected to the 64-bit 133 MHz bus along with that HighPoint card, so while my other computers are limited by their PCI busses, this computer is capable of pulling the full 1000 megabits both upstream and downstream. I'm really wanting a PCI-X board for my gaming rig now, so that I can have 100 MB/sec RAID performance loading game levels without even needing a hard drive in the gaming system at all!