RAID on drives of different sizes: is it possible? Pros/cons

hcforde

Distinguished
Feb 9, 2006
I have two Cheetahs, an 18 GB and a 9 GB. Both are in the same family, so all the other specs are the same.

I have a 2-channel U320 RAID controller in a 32-bit slot.

Can I RAID these?
If so, would the extra 9 GB of space on the 18 GB drive be wasted, or could it be used as a separate drive?
If I can RAID them, what would it do to performance? These are SCSI-3 drives (U160).

Is it best to have the RAID on one channel or across both channels?
 

choirbass

Distinguished
Dec 14, 2005
For a RAID array, the capacity is the number of drives multiplied by the capacity of the smallest drive in the array... so for two 18 GB drives and one 9 GB drive, you would end up with 27 GB usable if they're all in the same array... capacity at that point also depends on which type of RAID array you decide to use... for RAID 0, it's honestly best to use identical drives for an ideal scenario.
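A quick sketch of that capacity rule, just to illustrate (the function name and drive sizes are only examples, not anything from a real setup):

```python
# RAID 0 usable capacity: every member only contributes as much space as
# the smallest drive in the array
def raid0_capacity(drive_sizes_gb):
    return len(drive_sizes_gb) * min(drive_sizes_gb)

print(raid0_capacity([18, 18, 9]))  # 27 GB for two 18s and a 9
print(raid0_capacity([18, 9]))      # 18 GB for one 18 and one 9
```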

...In all honesty though, I wouldn't use the 9 GB drive in the same array as the others anyhow... unless you don't mind losing capacity. Speed between the 9 and 18 GB drives should be fairly similar, I would think; not too much difference.

RAID 0's benefits are solely its speed advantages... data throughput scales fairly linearly with each drive you add, because the data is striped across all the drives (e.g., if 1 drive maxes out at a 100 MB/s transfer rate, 2 drives max out at 200 MB/s and 3 drives at 300 MB/s, up to as many drives as you are capable of adding... ideally, throughput would be perfectly linear like that). The downside for speed, though, is that your access times are longer, because the data is striped across all the drives and has to be located on each of them (so it may not be ideal for a Windows boot partition).

And there's no redundancy, either... if one drive becomes unreadable, the array will fail and your data will be inaccessible (some people consider that a higher possibility of failure, but honestly, having more drives in a RAID array doesn't make any particular drive more likely to fail than if it were a single drive; you mainly just have more drives to be concerned with)... if one of the drives is disconnected from the array, the array will fail to work until the drive is reconnected (assuming the drive doesn't get erased or damaged in that time for whatever reason).
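Here is that scaling idea as a rough sketch (the 100 MB/s per-drive figure is just the example number from above, not a real benchmark):

```python
# idealized RAID 0 throughput: striping scales roughly linearly with the
# number of drives; real arrays fall somewhat short of this
def ideal_raid0_throughput(per_drive_mb_s, num_drives):
    return per_drive_mb_s * num_drives

for n in (1, 2, 3):
    print(f"{n} drive(s): ~{ideal_raid0_throughput(100, n)} MB/s")
```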

RAID 1 has the benefit of mirroring your data, so if one of the drives fails (or is disconnected, for that matter), your data is still safe on the other working drive and you can keep using it as if nothing happened (the array can even be rebuilt onto another drive, restoring your mirror). The downside is that the total capacity of the 2-drive array is halved: two 18 GB drives in a RAID 1 array give you 18 GB for the array, not 36 GB. The speed of RAID 1 is about the same as a single drive (though writes are a bit slower, because the exact same data has to be duplicated to the mirror drive in the array).
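And the mirror case, again just as an illustration:

```python
# RAID 1 usable capacity: everything is mirrored, so you only get the
# capacity of the smallest member no matter how many drives you add
def raid1_capacity(drive_sizes_gb):
    return min(drive_sizes_gb)

print(raid1_capacity([18, 18]))  # 18 GB usable from two 18 GB drives
```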

Those are the two most common RAID levels, anyhow.

Partitioning your array works exactly the same as if it weren't RAIDed at all... you have a set capacity to work with, and the option to create numerous partitions within that array.

And, as always... keeping your data backed up elsewhere is a good practice to exercise regularly anyhow, whether on a separate internal drive that isn't in the array, an external drive, a networked drive, or even optical media such as DVDs/CDs.

It's better to have each drive on its own channel, to minimize throughput bottlenecking... over a 32-bit PCI bus, you're limited to a maximum of 133 MB/s total, so with overhead that's about 127 MB/s max.
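Roughly where those numbers come from (the 5% overhead allowance is just an assumption to land near the ~127 MB/s figure above):

```python
# shared 32-bit / 33 MHz PCI bus ceiling; every device on the bus splits this
bus_width_bytes = 32 / 8            # 4 bytes per transfer on a 32-bit bus
clock_mhz = 33.33
peak = bus_width_bytes * clock_mhz  # ~133 MB/s theoretical ceiling
usable = peak * 0.95                # rough allowance for protocol overhead
print(f"peak ~{peak:.0f} MB/s, usable ~{usable:.0f} MB/s")
```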

Anyhow, hope that helps some.
 

PCcashCow

Distinguished
Jun 19, 2002
Since it seems you're new to SCSI, you should know that a two-channel U320 card is pretty good. Everything Choirbass said is true, but I would just suggest getting a set of U320 drives; you can find them very cheap refurbished or used on eBay. However, many guys and gals forget to take note of what type of drive they have. If you have a 68-pin drive, you have no problem, BUT if it's 80-pin SCA, then you'll need a converter to allow for correct cabling and power in an ATX case. Also, while you are limited by the bus, the internal transfer rates for those drives are not just burst figures; they can (be sure to cool them) sustain high transfer rates for longer durations than IDE and most SATA drives. Pass along the models of each drive and the controller card, and I or someone else could post a good setup. For me, I have my base OS and apps on two U320s in RAID 0, and four 146 GB U320 drives in a RAID 5. (It's a monster, I know, but fast, fast!)
 

Pain

Distinguished
Jun 18, 2004
I thought the size of a RAID 0 array is the size of the smallest disk times the number of disks. So in this case the array would be 18 GB.

One 18 GB disk and one 9 GB disk: 2 x 9 = 18.

ADDED: Oh, I see what you said; my mistake. He only has 2 disks though, so 18 GB total. I thought you said the sum of the sizes of the disks, not the number of disks multiplied by the smallest disk. :oops:
 

choirbass

Distinguished
Dec 14, 2005
Yeah, that was my bad too... I misread and assumed he had two 18 GB drives and one 9 GB drive, instead of just one of each (which makes a lot more sense to work with on a 2-channel controller, lol).
 

hcforde

Distinguished
Feb 9, 2006
Thanks for the info so far.

I am not new to SCSI but am returning to it after a few years' absence. I need it for a very specific purpose: I am a foreign currency trader with a bank of 6 monitors on one machine. When I want to change the data from one bank of information to another, it takes about 45 seconds for the monitors to refresh with the new information. TOO LONG!!! This machine will be dedicated to running this one program for this one purpose only. I have changed from AGP and PCI video cards to PCI-Express, with one SCSI drive in the system on an Adaptec 29160 controller, and the time has gone down to 8 seconds. With a caching RAID controller, I figure 2-3 seconds will be my target.

I did look on eBay and am bidding on a number of sets of SCSI drives. The small-capacity U320s are going for dirt cheap. Perfect for me, as I do not need capacity.

Cooling is being looked into. I remember SCSI drives getting hot in years past, and at 15,000 RPM cooling is a top priority. Coolermaster has closed out the CoolDrive 3 and they are being sold at CompUSA for $15.00 each.

Directron has some devices that fit into a 2- or 3-bay (5.25") area, include a fan, and will cool up to 3 or 4 (3.5") hard drives.

www.directron.com/tcistorm7.html - $28.00. Will probably only cool 2 drives at a time.
www.directron.com/stb3t4e1.html - $19.99. I am considering investing in a CM Stacker anyway.

If my calculations are right, running two U320s in a RAID 0 stripe in a 32-bit slot, even with a U320 RAID controller, will max out the bandwidth of the PCI slot. I will not see any better performance if I add more drives to the RAID 0 setup. Am I correct in this line of thinking?
 

choirbass

Distinguished
Dec 14, 2005
If my calculations are right, running two U320s in a RAID 0 stripe in a 32-bit slot, even with a U320 RAID controller, will max out the bandwidth of the PCI slot. I will not see any better performance if I add more drives to the RAID 0 setup. Am I correct in this line of thinking?

Yeah... on a standard 32-bit PCI slot, your throughput will hit a bottleneck with two 15k RPM drives; adding additional drives on the same bus will bring no throughput gains, since ~127 MB/s is the most you would be capable of...
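A rough way to see where that ceiling kicks in (the ~70 MB/s per-drive sustained rate is only a guess for a 15k RPM drive of that era, not a measured figure):

```python
# how many drives it takes to hit the shared 32-bit PCI ceiling
PCI_LIMIT_MB_S = 127    # usable bandwidth of the 32-bit / 33 MHz bus
PER_DRIVE_MB_S = 70     # assumed sustained rate of one 15k RPM drive

for n in (1, 2, 3, 4):
    raw = n * PER_DRIVE_MB_S
    effective = min(raw, PCI_LIMIT_MB_S)
    note = " (bus-limited)" if raw > PCI_LIMIT_MB_S else ""
    print(f"{n} drive(s): ~{effective} MB/s{note}")
```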

Consider that even two 36 GB 10k RPM Raptor SATA HDDs in RAID 0 will saturate the 32-bit PCI bus almost completely.

But aside from throughput, which will be maxed out on the 32-bit bus... the SCSI drives' access times should be very low.

...(This will involve additional spending, if you're open to it.) If it's feasible, you could invest in a PCIe x1 SCSI controller (I don't know which controllers are available, or how much they would cost)... but if your motherboard has any available PCIe x1 slots, that could be the way to go... and you could possibly even find a controller with more than 2 channels... the available bandwidth would be roughly double that of 32-bit PCI, and each PCIe slot has its own dedicated bandwidth, too... so for PCIe x1 you would get about 250 MB/s per direction, per slot... if I'm wrong, someone should correct me.

- One 32-bit PCI bus has 133 MB/s max shared across the whole bus, no matter how many slots you have... so you're forced to share bandwidth between slots, and can very easily run into a bottleneck

- 1 PCIe x1 slot has about 250 MB/s of dedicated bandwidth per direction
- 2 PCIe x1 slots have about 500 MB/s combined
- 3 PCIe x1 slots have about 750 MB/s combined, and so on (see the sketch below)
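A small sketch of that shared-vs-dedicated difference (first-generation PCIe figures, per direction; the slot counts are arbitrary examples):

```python
# conventional PCI shares one bus; each PCIe slot gets its own link
def total_bandwidth_mb_s(num_slots, per_slot_mb_s, shared_bus):
    return per_slot_mb_s if shared_bus else num_slots * per_slot_mb_s

print("3 PCI slots :", total_bandwidth_mb_s(3, 133, shared_bus=True), "MB/s shared")
print("3 PCIe x1   :", total_bandwidth_mb_s(3, 250, shared_bus=False), "MB/s combined")
```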

...I hope I'm right... I hate being unsure about things like that, lol.

But if it's not feasible to invest in that... ~127 MB/s is your limit over a 32-bit PCI bus.
 

choirbass

Distinguished
Dec 14, 2005
I don't think it would be possible, but this would be nice :)

[RAID Disk 1: 9 GB drive]
[RAID Disk 2: 9 GB partition][Unraided logical partition]


Yeah... for hardware RAID, I don't think that would be possible... because the physical drive is actually connected to the RAID controller and, where possible, the whole capacity of the drive gets used (unless you're mixing drives of different capacities, in which case one or more get shortchanged, basically)... For software-based RAID though (RAID 0, 1, 5), I'm sure what you're suggesting is very possible, because you can create, manipulate, and RAID multiple partitions of different sizes from within the OS (but the OS partition isn't selectable in a software-based array; only separate partitions can be used, with a maximum of 1 partition per physical drive in a single array... a separate software array would require separate partitions, 1 per drive... don't know if that makes any sense the way I explained it though)... But for performance, stability, and security, software RAID definitely isn't the best solution (though it's certainly the cheapest, lol)...

Such as what you originally put:

[RAID Disk 1: 9 GB drive]
[RAID Disk 2: 9 GB partition][Unraided logical partition]

changed to:

[Software-based RAID partition: 9 GB drive]
[Software-based RAID partition: 9 GB partition][Unraided Windows partition]

and that should work.
 

JonathanDeane

Distinguished
Mar 28, 2006
Yeah... for hardware RAID, I don't think that would be possible... because the physical drive is actually connected to the RAID controller and, where possible, the whole capacity of the drive gets used (unless you're mixing drives of different capacities, in which case one or more get shortchanged, basically)... For software-based RAID though (RAID 0, 1, 5), I'm sure what you're suggesting is very possible, because you can create, manipulate, and RAID multiple partitions of different sizes from within the OS (but the OS partition isn't selectable in a software-based array; only separate partitions can be used, with a maximum of 1 partition per physical drive in a single array... a separate software array would require separate partitions, 1 per drive... don't know if that makes any sense the way I explained it though)... But for performance, stability, and security, software RAID definitely isn't the best solution (though it's certainly the cheapest, lol)...

I learned about Dynamic Disks in school and they didn't seem like such a good thing (isn't that kind of like software RAID? It's been a while, and Active Directory has permanently damaged my brain, lol). Maybe JBOD? Do they do that in hardware?
 

choirbass

Distinguished
Dec 14, 2005
I learned about Dynamic Disks in school and they didn't seem like such a good thing (isn't that kind of like software RAID? It's been a while, and Active Directory has permanently damaged my brain, lol). Maybe JBOD? Do they do that in hardware?

Converting your partitioning from Basic to a Dynamic Disk really isn't necessary... unless you plan on using software RAID of some form (converting the disk from basic to dynamic is what you would need to do, though, in order to even take advantage of software RAID). Plus, if you convert your disk to dynamic, you cannot install a separate operating system on that same disk (I'm a little hazy on what the downsides of dynamic disks are, but it has a lot to do with not being able to access the data outside of the host OS, because I *think* the partition is no longer considered readable to a 'foreign' OS; not sure of that part though)... so, yeah... no real point to using dynamic disks other than for software RAID.

Software RAID definitely isn't a good idea though, especially if you plan on preserving your data at all... it can be fun as a *very* temporary solution (such as if you want to play games or benchmark to see how fast it is)... but problems, such as the OS somehow getting corrupted, would cause you to lose your array, as there's no real way to have redundancy for your data... so it's really only useful for simulating a RAID 0 array that you wouldn't mind losing.

Most hardware-based RAID controllers that I know of support at least RAID 0, 1, 10, and JBOD... or at least 0, 1, and JBOD.
 

hcforde

Distinguished
Feb 9, 2006
OK Choirbass -- then two U160s striped in RAID 0 would max me out also; even one would do it. I don't even have to go as high as the U320s. My greatest advantage would be improved access times. Is there any way to figure out what access times would be with 3, 4, or 5 drives, even on a theoretical basis?

I want a dream motherboard with the following minimums:
2 PCI-Express slots running at x16
1 PCI-Express slot running at x8
1 PCI-X slot
2 PCI slots
1 more, oh, maybe another PCI-X

And I think I want it to run an Intel Core2 Duo

Doesn't the PCI-Express bus run on lanes, and don't the newer boards have 40 lanes at 250 MB/sec per lane = 10 GB/s each way? I know this applies to video cards and assume the same applies to any other data traveling on the bus. I believe the new ATI chipset for motherboards has 48 lanes, for 3 fully functional PCI-Express x16 slots.

This would make possible the following (as far as PCI-Express goes); I do not know how the chipsets handle PCI and PCI-X limitations.
2 PCI-Express slots running at x16
2 PCI-Express slots running at x8
2 PCI-X slots
1 PCI slot

I may be getting off topic on my own post, but there seems to be some type of limitation on what can be done per chipset. I have looked at Tyan and Supermicro workstation boards and they are heavy on PCI-X slots and light on PCI-Express slots. Why hasn't anybody made a hybrid board for enthusiasts who want to go full bore? Gigabyte has come the closest with their Quad Royal, which has 4 PCI-Express slots (x8 when all 4 are being used), and they cannot make them fast enough; everybody I call is out of stock. Hint to the manufacturers: this hybrid stuff will sell. I even saw an early review of the Quad Royal saying it was a neat idea, but they did not think the board had a market. Even Intel's 3x PCI-Express x16 board, the 975XBX, is hard to find.

Anyway, I hopefully have five 18 GB U320 Atlas 10K III's and four 18 GB U160 Atlas 10K II's coming my way. Maybe I can bench them and see what access times I can achieve.


Thanks for your replies
 

choirbass

Distinguished
Dec 14, 2005
OK Choirbass -- then two U160s striped in RAID 0 would max me out also; even one would do it. I don't even have to go as high as the U320s. My greatest advantage would be improved access times. Is there any way to figure out what access times would be with 3, 4, or 5 drives, even on a theoretical basis?

And I think I want it to run an Intel Core2 Duo

Doesn't the PCI-Express bus run on lanes, and don't the newer boards have 40 lanes at 250 MB/sec per lane = 10 GB/s each way? I know this applies to video cards and assume the same applies to any other data traveling on the bus. I believe the new ATI chipset for motherboards has 48 lanes, for 3 fully functional PCI-Express x16 slots.

This would make possible the following (as far as PCI-Express goes); I do not know how the chipsets handle PCI and PCI-X limitations.
2 PCI-Express slots running at x16
2 PCI-Express slots running at x8
2 PCI-X slots
1 PCI slot

A single 7200 RPM 160 GB drive should come relatively close to a burst transfer rate that saturates the PCI bus; sustained transfer rates, however, should be well below that, closer to around half (~70 MB/s or so)... which is where RAID 0 comes in handy; RAID 0 will boost your sustained transfer rates by a good margin...

For faster access times, however... the fewer HDDs you have, the better (with a single HDD giving the fastest average access times).

For instance:
- a single 160 GB HDD might give you an average access time of 10 ms
- two 160 GB HDDs in RAID 0 might give an average access time of maybe 12 ms
- three 160 GB HDDs in RAID 0 might give an average access time of maybe 15 ms

So the more HDDs you have in a RAID 0 array, the longer it will take for the files that need to be accessed to be found (because there are that many more drives to search through for what you need) before the faster sustained transfer rates can kick in.
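There is no exact formula for that; the sketch below just restates the illustrative figures above so the trend is easy to see (these are example numbers, not measurements):

```python
# example figures from the list above, only meant to show the trend: average
# access time creeps up a little as drives are added to a RAID 0 stripe
example_access_ms = {1: 10, 2: 12, 3: 15}

for drives, ms in example_access_ms.items():
    print(f"{drives} drive(s) in RAID 0: ~{ms} ms average access time")
```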

As far as the differences between PCI-X and PCIe... PCI-X is primarily used in servers; that's usually the only place you'll find it, on server motherboards... but PCIe, although still comparatively in its infancy, is slowly phasing PCI-X out... probably very slowly, I would imagine, lol, because of how firmly entrenched the more mature PCI-X is in the server market... so the abundance of PCI-X on consumer boards is significantly less.

I'm not sure how individual chipsets handle PCIe limitations, but each "lane" is basically just another name for the number in the PCIe x-rating... but you're right about bidirectional throughput (I forgot, lol)... so half the bottlenecking would then occur.

I'm not sure about additional lanes within a single PCIe slot (though I could be wrong)... because technically that would also increase the PCIe x-number... so for 40 lanes, a single slot would then be PCIe x40... but maybe they mean it as something else, other than lanes...

Edit: lol... I see what you mean now, I missed that part... when they say 48 lanes, they're combining 3 PCIe x16 slots... I see that now :oops: ...not additional lanes per slot...

But...
each direction is capable of transferring up to 250 MB/s per lane
bidirectional is then capable of transferring up to 500 MB/s combined per lane
so...

PCI: 133 MB/s, 8 bits per byte on the bus
PCI 32-bit = ~133 MB/s (~1 Gb/s)

PCIe: 2 x 250 MB/s, 10 bits per byte on the wire (8b/10b encoding)
PCIe x1 (1 lane) = ~500 MB/s (~5 Gb/s)
PCIe x4 (4 lanes) = ~2 GB/s (~20 Gb/s)
PCIe x8 (8 lanes) = ~4 GB/s (~40 Gb/s)
PCIe x16 (16 lanes) = ~8 GB/s (~80 Gb/s)
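The same table worked out from the raw line rate (first-generation PCIe: 2.5 Gb/s per lane per direction, with 8b/10b encoding putting 10 bits on the wire per data byte):

```python
# first-generation PCIe bandwidth derived from the raw line rate
def pcie_gen1_mb_s(lanes, bidirectional=True):
    per_lane = 2.5e9 / 10 / 1e6     # 250 MB/s of payload per lane per direction
    total = per_lane * lanes
    return total * 2 if bidirectional else total

for lanes in (1, 4, 8, 16):
    print(f"x{lanes:<2}: ~{pcie_gen1_mb_s(lanes):,.0f} MB/s bidirectional")
```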

Also... I had the math off for a minute there: 32-bit PCI transfers 4 bytes per clock, so 33 MHz x 4 bytes = ~133 MB/s, which is exactly what the bus is rated at. To get ~264 MB/s you would need either a 64-bit PCI bus at 33 MHz or a 32-bit bus at 66 MHz (both exist, mostly on server boards)... so normal PCI really is 32-bit; it's just limited by its 33 MHz clock and the fact that the bus is shared... I know someone is probably going to feel inclined to correct me on that, lol.
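For reference, the conventional PCI variants work out like this (bandwidth is just bus width in bytes times clock rate; the clock values are nominal):

```python
# conventional PCI bandwidth = bus width (bytes) x clock rate;
# standard desktop PCI is the 32-bit / 33 MHz case
for width_bits, clock_mhz in [(32, 33), (64, 33), (32, 66), (64, 66)]:
    mb_s = width_bits / 8 * clock_mhz
    print(f"{width_bits}-bit @ {clock_mhz} MHz: ~{mb_s:.0f} MB/s")
```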