How I built a 2.8TB RAID storage array

Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

My 2.8TB RAID 5 array is finally up and running. Here I'll discuss my
initial intended specifications, what I actually ended up with, and
associated commentary. Please see
<URL:http://groups.google.ca/groups?selm=slrnch28at.j0n.ylee%40pobox.com>
and
<URL:http://groups.google.ca/groups?selm=slrncu34ip.55k.ylee%40pobox.com>
for background material.

STORAGE MEDIUM
Initial: Eight 250GB SATA drives.
Actual: Nine 400GB PATA drives; eight for use, one as a cold spare.
Why: Found a stupendous sale at CompUSA Christmas week:
just-released-in-November Seagate Barracuda 7200.8 400GB PATA drives
at $230 each, with no quantity limit. I'd have loved to go with the
SATA model, but given that Froogle lists the lowest price for one at
$350 (the PATA model retails at $250-350), it was an easy choice.


CASE
Initial: Antec tower case.
Actual: Antec 4U rackmount case.
Why: I'd always thought of rackmounts as unsuitable for anyone
without an actual rack sitting in a data center, but after realizing
that a rackmount case is simply a tower case lying on its side, it
was an easy decision given the space advantages. The Antec case here
comes with Antec's True Power 550W EPS12V power supply, and both have
great reputations. In practice, I found the Antec case remarkably
easy to open up (one thumbscrew), easy to work with (all drive cages
are removable), and roomy.


MOTHERBOARD
Initial: Unspecified, but probably something Athlon-based and cheap.
Actual: Supermicro X5DAL-G Intel server motherboard.
Why: I became convinced that the sheer volume of the PCI traffic
generated by my proposed array under software RAID would overwhelm any
non-server motherboard, resulting in errors. In addition, I wanted
PCI-X slots for optimal performance. Even though I think AMD in
general offers much better bang for the buck, since I didn't want to
spend the $$$ for Opteron, a Xeon motherboard with an Intel server
chipset was the best compromise.


CONTROLLER CARDS
Initial: Two Highpoint RocketRAID 454 cards.
Actual: Two 3Ware 7506-4LP cards.
Why: I needed PATA cards to go with my PATA drives, and also wanted to
put the two PCI-X slots on my motherboard to use. I found exactly two
PATA PCI-X controller cards: The 3Ware, and the Acard AEC-6897. Given
that the Acard's Linux driver compatibility looked really, really
iffy, I went with the 3Ware. I briefly considered the 7506-8 model,
which would've saved me about $120, but figured I'd be better off
distributing the bandwidth over two PCI-X slots rather than one.


SOFTWARE
Initial: Linux software RAID 5 and XFS or JFS.
Actual: Linux software RAID 5 and JFS.
Why: Initially I planned on software RAID knowing that the Highpoint
(and the equivalent Promise and Adaptec cards) didn't do true hardware
RAID. Even after switching over to 3Ware (which *does* do true
hardware RAID), everything I saw and read convinced me that software
RAID was still the way to go for performance, long-term compatibility,
and even 400GB extra space (given I'd be building one large RAID 5
array instead of two smaller ones).

I saw *lots* of conflicting benchmarks on whether XFS or JFS was the
way to go. Ultimately
<URL:http://pcbunn.cacr.caltech.edu/gae/3ware_raid_tests.htm> pushed
me toward JFS, but I suspect I could have gone XFS with no difficulty
whatsoever.


COST
As implied above, I paid $2070 plus sales tax for the drives. I lucked
out and found a terrific eBay deal for a prebuilt system containing
the above-mentioned case and motherboard, two Xeon 2.8GHz CPUs, a DVD
drive, and 2GB of memory for $1260 including shipping. Labor aside, I'd
have paid *much* more to build an equivalent system myself. The 3Ware
cards were $240 each, no shipping or tax, from Monarch Computer. With
miscellaneous costs (such as a Cooler Master 4-in-3 drive cage and an
80GB boot drive from Best Buy for $40 after rebates), I paid under
$4100, tax and shipping included, for everything. At $1.46/GB *plus* a
powerful dual-CPU system, boatloads of memory, and a spare drive, I am
quite satisfied with the overall bang for the buck.


ASSEMBLY: HARDWARE
Most of my time went into the physical assembly; it's astonishing
just how long the simple tasks take: opening up each retail-boxed
drive, screwing the drive into the drive cage, putting the cage into
the case, removing the cage and the drive upon realizing the drive
went in through the wrong mounting holes, reinstalling the drive and
cage, and so on. My studio apartment still looks like a computer
store exploded inside it.

3Ware wisely provides PATA master-only cables with its cards, which
saved some room, but my formerly-roomy case nonetheless looks like the
rat's nest to end all rat's nests inside.


ASSEMBLY: SOFTWARE
I'd gone ahead and installed Fedora Core 3 with only the boot drive
in place, before the controller cards arrived. The 3Ware cards
present each PATA drive as a SCSI device (/dev/sd[a-h]). Once booted,
I used mdadm to create the RAID array (no partitions; just whole
drives). While the array chugged along creating the parity
information (about four hours), I created one large LVM2 volume group
and logical volume on top of the array, then one large JFS file
system on top of that.
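
For anyone wanting to replicate this, here's roughly what the setup
boils down to (a sketch, not necessarily the exact commands I typed;
the chunk size, volume names, and mount point match the mdadm and df
output below, but exact flags and the JFS userland package may differ
on your distribution):

# build the eight-drive RAID 5 array from whole disks, 512K chunks
mdadm --create /dev/md0 --level=5 --chunk=512 --raid-devices=8 /dev/sd[a-h]

# layer LVM2 on top of the md device
pvcreate /dev/md0
vgcreate VolGroup01 /dev/md0
lvcreate -l 100%FREE -n LogVol00 VolGroup01
# (older LVM2 versions may not accept 100%FREE; pass the free-extent
# count from 'vgdisplay VolGroup01' to -l instead)

# create and mount one big JFS file system
mkfs.jfs /dev/VolGroup01/LogVol00
mkdir -p /mnt/newspace
mount -t jfs /dev/VolGroup01/LogVol00 /mnt/newspace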

By the way, I found a RAID-related bug in Fedora Core's boot scripts;
see <URL:https://bugzilla.redhat.com/beta/show_bug.cgi?id=129633>.


RESULTS
'df -h':
/dev/mapper/VolGroup01-LogVol00
2.6T 221G 2.4T 9% /mnt/newspace


'mdadm --detail /dev/md0':
Version : 00.90.01
Creation Time : Wed Feb 16 01:53:33 2005
Raid Level : raid5
Array Size : 2734979072 (2608.28 GiB 2800.62 GB)
Device Size : 390711296 (372.61 GiB 400.09 GB)
Raid Devices : 8
Total Devices : 8
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Sat Feb 19 16:26:34 2005
State : clean
Active Devices : 8
Working Devices : 8
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 512K

Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
2 8 32 2 active sync /dev/sdc
3 8 48 3 active sync /dev/sdd
4 8 64 4 active sync /dev/sde
5 8 80 5 active sync /dev/sdf
6 8 96 6 active sync /dev/sdg
7 8 112 7 active sync /dev/sdh
Events : 0.319006


'bonnie++ -s 4G -m 3ware-swraid5-type -p 3 ; \
bonnie++ -s 4G -m 3ware-swraid5-type-c1 -y & \
bonnie++ -s 4G -m 3ware-swraid5-type-c2 -y & \
bonnie++ -s 4G -m 3ware-swraid5-type-c3 -y &'
(To be honest, these results are just a bunch of numbers to me, so
any interpretations of them are welcome. I should mention that they
were produced with three distributed-computing projects [BOINC,
mprime, and Folding@Home] running in the background. Although each
was run at 'nice -n 19', they surely impacted CPU and perhaps disk
performance somewhat.)

Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
3ware-swraid5-ty 4G 15749 50 15897 8 7791 6 10431 49 20245 11 138.1 2
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 381 6 +++++ +++ 208 3 165 7 +++++ +++ 192 4
3ware-swraid5-type-c1,4G,15749,50,15897,8,7791,6,10431,49,20245,11,138.1,2,16,381,6,+++++,+++,208,3,165,7,+++++,+++,192,4
done.
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
3ware-swraid5-ty 4G 13739 46 17265 9 7930 6 10569 50 20196 11 146.7 2
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 383 7 +++++ +++ 207 3 162 7 +++++ +++ 191 4
3ware-swraid5-type-c2,4G,13739,46,17265,9,7930,6,10569,50,20196,11,146.7,2,16,383,7,+++++,+++,207,3,162,7,+++++,+++,191,4
done.
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
3ware-swraid5-ty 4G 13288 43 16143 8 7863 6 10695 50 20231 12 149.6 2
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 537 9 +++++ +++ 207 3 161 7 +++++ +++ 188 4
3ware-swraid5-type-c3,4G,13288,43,16143,8,7863,6,10695,50,20231,12,149.6,2,16,537,9,+++++,+++,207,3,161,7,+++++,+++,188,4


FINAL NOTES, THOUGHTS, AND QUESTIONS
I've noticed that over sync NFS, initiating a file copy from my older
Athlon 1.4GHz system to the RAID-array system is *much, much, much*
slower (many minutes as opposed to seconds) than initiating the copy
in the same direction but from the array system. Why is this?

I almost went with the SATA (8506) version of the 3Ware cards and a
bunch of PATA-SATA adapters in order to maintain compatibility with
future drives, likely to be SATA only. However, a colleague pointed
out the foolishness of paying $200 extra ($120 for eight adapters plus
$80 for the extra cost of the SATA cards) in order to (possibly)
futureproof a $480 investment.

I was concerned that the drives (and the PATA cables) would cause
horrible heat and noise issues. Surprisingly, they didn't; according
to 'sensors', internal temperatures only rose by a few degrees, and
the server is just as (very) noisy now as it was before the RAID
drives went in. I think I'll be able to get away with stuffing the
array inside my hall closet after all.

The server, before I put the cards and RAID drives into the system but
with the distributed-computing projects putting the CPU at 100%
utilization, took the power output on my Best Fortress 750VA/450W UPS
from about 55% to about 76%. With the RAID up and running and again
with 100% CPU utilization, output is 87-101% with the median at
perhaps 93%. I realize I really ought to invest in another UPS, but
with these figures I'm tempted to get by on what I currently have.
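
(For reference: if those output percentages are relative to the UPS's
450W rating, the 93% median works out to roughly 420W of load, which
is uncomfortably close to the limit.)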

Yes, I could've saved a considerable amount of money had I gone with,
say, a used dual PIII server system with regular PCI slots (and, thus,
$80 Highpoint RAID cards, again for the four PATA channels and not for
their RAID functionality per se) and 512MB. And I suspect that for a
home user like me performance wouldn't have been too much less. But I
like to buy and build systems I can use for years and years without
having to bother with upgrading, and figure I've made a long-term (at
least 4-5 years, which is long term in the computer world) investment
that provides me with much more than just storage functionality. And
again, $1.46/GB is hard to beat.

--
Read my Deep Thoughts @ <URL:http://www.ylee.org/blog/> PERTH ----> *
Cpu(s): 6.7% us, 3.7% sy, 0.4% ni, 75.4% id, 12.3% wa, 1.4% hi, 0.0% si
Mem: 515800k total, 511628k used, 4172k free, 5812k buffers
Swap: 2101032k total, 13152k used, 2087880k free, 163928k cache
 
Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

Wow, congrats on your successful build!
I am on the way to building a storage array myself, thinking of a 1U
server with 3 x 250GB disks in software RAID 5, and Fedora too.
Although that might be enough for now, I'd have the chance to expand
it in the future and still save some money.
 
Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

What kind of cables did 3ware provide, regular flat ribbon or round cables?
If round cables, can you tell if they are just ribbons rolled up?

I had a bunch of questions but I read your post again and pretty much
everything was answered. Maybe even the cable question but I didn't see it.

While everything is still fresh in your mind, make sure you label the
drives so you are absolutely sure which drive is which. When I had a
drive failure with my measly 500GB RAID 5 array, it was a big concern
of mine when I pulled a drive and replaced it, not knowing EXACTLY
what would happen should I pull the wrong one. I can only imagine the
sweating over which of your 8 drives to replace! Like they say,
measure twice, cut once!

For me, choosing between 2 hardware arrays or 1 software array would have
been a big decision, the decision of all decisions. When did you finally
make the decision? Was the machine already assembled before you really knew
which way you would go?

Isn't current tech/$ great? A guy can do some really, really cool stuff
with a reasonable budget. I mean $4100 is a lot of money, but what you have
is amazing.

Great project by the way.

--Dan


"Yeechang Lee" <ylee@pobox.com> wrote in message
news:slrnd1g04a.5mt.ylee@pobox.com...
> My 2.8TB RAID 5 array is finally up and running. Here I'll discuss my
> initial intended specifications, what I actually ended up with, and
> associated commentary. Please see
> CONTROLLER CARDS
> Initial: Two Highpoint RocketRAID 454 cards.
> Actual: Two 3Ware 7506-4LP cards.
> Why: I needed PATA cards to go with my PATA drives, and also wanted to
> put the two PCI-X slots on my motherboard to use. I found exactly two
> PATA PCI-X controller cards: The 3Ware, and the Acard AEC-6897. Given
> that the Acard's Linux driver compatibility looked really, really
> iffy, I went with the 3Ware. I briefly considered the 7506-8 model,
> which would've saved me about $120, but figured I'd be better off
> distributing the bandwidth over two PCI-X slots rather than one.
>
>
> SOFTWARE
> Initial: Linux software RAID 5 and XFS or JFS.
> Actual: Linux software RAID 5 and JFS.
> Why: Initially I planned on software RAID knowing that the Highpoint
> (and the equivalent Promise and Adaptec cards) didn't do true hardware
> RAID. Even after switching over to 3Ware (which *does* do true
> hardware RAID), everything I saw and read convinced me that software
> RAID was still the way to go for performance, long-term compatibility,
> and even 400GB extra space (given I'd be building one large RAID 5
> array instead of two smaller ones).
>
> I saw *lots* of conflicting benchmarks on whether XFS or JFS was the
> way to go. Ultimately
> <URL:http://pcbunn.cacr.caltech.edu/gae/3ware_raid_tests.htm> pushed
> me toward JFS, but I suspect I could have gone XFS with no difficulty
> whatsoever.
>
>
> COST
> As implied above, I paid $2070 plus sales tax for the drives. I lucked
> out and found a terrific eBay deal for a prebuilt system containing
> the above-mentioned case and motherboard, two Xeon 2.8GHz CPUs, a DVD
> drive, and 2GB memory for $1260 including shipping labor aside, I'd
> have paid *much* more to build an equivalent system myself. The 3Ware
> cards were $240 each, no shipping or tax, from Monarch Computer. With
> miscellaneous costs (such as a Cooler Master 4-in-3 drive cage and an
> 80GB boot drive from Best Buy for $40 after rebates), I paid under
> $4100, tax and shipping included, for everything. At $1.46/GB *plus* a
> powerful dual-CPU system, boatloads of memory, and a spare drive, I am
> quite satisfied with the overall bang for the buck.
>
>
> ASSEMBLY: HARDWARE
> I spent most of the assembly time on the physical assembly part; it's
> astonishing just how long the simple tasks of opening up each
> retail-boxed drive, screwing the drive into the drive cage, putting
> the cage into the case, removing the cage and the drive when you
> realize you've put the drive in with the wrong mounting holes,
> reinstalling the drive and cage, etc., etc. take! My studio apartment
> still looks like a computer store exploded inside it.
>
> 3Ware wisely provides PATA master-only cables with its cards, which
> saved some room, but my formerly-roomy case nonetheless looks like the
> rat's nest to end all rat's nests inside.
>
>
> ASSEMBLY: SOFTWARE
> I'd gone ahead and installed Fedora Core 3 with the boot drive only
> before the controller cards arrived. The 3Ware cards present each
> PATA drive as a SCSI device (/dev/sd[a-h]). Once booted, I used mdadm
> to create the RAID array (no partitions; just whole drives). While the
> array chugged along to create the parity information (about four
> hours), I then created one large LVM2 volume group and logical volume
> on top of the array, then created one large JFS file system.
>
> By the way, I found a RAID-related bug with Fedora Core's bootscripts;
> see <URL:https://bugzilla.redhat.com/beta/show_bug.cgi?id=129633>).
>
>
> RESULTS
> 'df -h':
> /dev/mapper/VolGroup01-LogVol00
> 2.6T 221G 2.4T 9% /mnt/newspace
>
>
> 'mdadm --detail /dev/md0':
> Version : 00.90.01
> Creation Time : Wed Feb 16 01:53:33 2005
> Raid Level : raid5
> Array Size : 2734979072 (2608.28 GiB 2800.62 GB)
> Device Size : 390711296 (372.61 GiB 400.09 GB)
> Raid Devices : 8
> Total Devices : 8
> Preferred Minor : 0
> Persistence : Superblock is persistent
>
> Update Time : Sat Feb 19 16:26:34 2005
> State : clean
> Active Devices : 8
> Working Devices : 8
> Failed Devices : 0
> Spare Devices : 0
>
> Layout : left-symmetric
> Chunk Size : 512K
>
> Number Major Minor RaidDevice State
> 0 8 0 0 active sync /dev/sda
> 1 8 16 1 active sync /dev/sdb
> 2 8 32 2 active sync /dev/sdc
> 3 8 48 3 active sync /dev/sdd
> 4 8 64 4 active sync /dev/sde
> 5 8 80 5 active sync /dev/sdf
> 6 8 96 6 active sync /dev/sdg
> 7 8 112 7 active sync /dev/sdh
> Events : 0.319006
>
>
> 'bonnie++ -s 4G -m 3ware-swraid5-type -p 3 ; \
> bonnie++ -s 4G -m 3ware-swraid5-type-c1 -y & \
> bonnie++ -s 4G -m 3ware-swraid5-type-c2 -y & \
> bonnie++ -s 4G -m 3ware-swraid5-type-c3 -y &'
> (To be honest these results are just a bunch of numbers to me, so any
> interpretations of them are welcome. I should mention that these were
> done with three distributed computing [BOINC, mprime, and
> Folding@Home] projects running in the background. Although 'nice -n
> 19' each, they surely impacted CPU and perhaps disk performance
> somewhat.)
>
> Version 1.03 ------Sequential Output------ --Sequential
Input- --Random-
> -Per Chr- --Block-- -Rewrite- -Per
Chr- --Block-- --Seeks--
> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP
/sec %CP
> 3ware-swraid5-ty 4G 15749 50 15897 8 7791 6 10431 49 20245 11
138.1 2
> ------Sequential Create------ --------Random
Create--------
> -Create-- --Read--- -Delete-- -Create-- --Read--- -Del
ete--
> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
/sec %CP
> 16 381 6 +++++ +++ 208 3 165 7 +++++ +++
192 4
>
3ware-swraid5-type-c1,4G,15749,50,15897,8,7791,6,10431,49,20245,11,138.1,2,1
6,381,6,+++++,+++,208,3,165,7,+++++,+++,192,4
> done.
> Version 1.03 ------Sequential Output------ --Sequential
Input- --Random-
> -Per Chr- --Block-- -Rewrite- -Per
Chr- --Block-- --Seeks--
> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP
/sec %CP
> 3ware-swraid5-ty 4G 13739 46 17265 9 7930 6 10569 50 20196 11
146.7 2
> ------Sequential Create------ --------Random
Create--------
> -Create-- --Read--- -Delete-- -Create-- --Read--- -Del
ete--
> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
/sec %CP
> 16 383 7 +++++ +++ 207 3 162 7 +++++ +++
191 4
>
3ware-swraid5-type-c2,4G,13739,46,17265,9,7930,6,10569,50,20196,11,146.7,2,1
6,383,7,+++++,+++,207,3,162,7,+++++,+++,191,4
> done.
> Version 1.03 ------Sequential Output------ --Sequential
Input- --Random-
> -Per Chr- --Block-- -Rewrite- -Per
Chr- --Block-- --Seeks--
> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP
/sec %CP
> 3ware-swraid5-ty 4G 13288 43 16143 8 7863 6 10695 50 20231 12
149.6 2
> ------Sequential Create------ --------Random
Create--------
> -Create-- --Read--- -Delete-- -Create-- --Read--- -Del
ete--
> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
/sec %CP
> 16 537 9 +++++ +++ 207 3 161 7 +++++ +++
188 4
>
3ware-swraid5-type-c3,4G,13288,43,16143,8,7863,6,10695,50,20231,12,149.6,2,1
6,537,9,+++++,+++,207,3,161,7,+++++,+++,188,4
>
>
> FINAL NOTES, THOUGHTS, AND QUESTIONS
> I've noticed that over sync NFS, initiating a file copy from my older
> Athlon 1.4GHz system to the RAID array system is *much, much, much*
> (seconds as opposed to many minutes)slower than if I initiate the copy
> in the same direction but from the array system. Why is this?
>
> I almost went with the SATA (8506) version of the 3Ware cards and a
> bunch of PATA-SATA adapters in order to maintain compatibility with
> future drives, likely to be SATA only. However, a colleague pointed
> out the foolishness of paying $200 extra ($120 for eight adapters plus
> $80 for the extra cost of the SATA cards) in order to (possibly)
> futureproof a $480 investment.
>
> I was concerned that the drives (and the PATA cables) would cause
> horrible heat and noise issues. These, surprisingly, didn't occur;
> according to 'sensors', internal temperatures only rose by a few
> degrees, and the server is just as (very) noisy now as pre-RAID
> drives. I think I'l be able to get away with stuffing the array inside
> my hall closet after all.
>
> The server, before I put the cards and RAID drives into the system but
> with the distributed-computing projects putting the CPU at 100%
> utilization, took the power output on my Best Fortress 750VA/450W UPS
> from about 55% to about 76%. With the RAID up and running and again
> with 100% CPU utilization, output is 87-101% with the median at
> perhaps 93%. I realize I really ought to invest in another UPS, but
> with these figures I'm tempted to get by on what I currently have.
>
> Yes, I could've saved a considerable amount of money had I gone with,
> say, a used dual PIII server system with regular PCI slots (and, thus,
> $80 Highpoint RAID cards, again for the four PATA channels and not for
> their RAID functionality per se) and 512MB. And I suspect that for a
> home user like me performance wouldn't have been too much less. But I
> like to buy and build systems I can use for years and years without
> having to bother with upgrading, and figure I've made a long-term (at
> least 4-5 years, which is long term in the computer world) investment
> that provides me with much more than just storage functionality. And
> again, $1.46/GB is hard to beat.
>
> --
> Read my Deep Thoughts @ <URL:http://www.ylee.org/blog/> PERTH ----> *
> Cpu(s): 6.7% us, 3.7% sy, 0.4% ni, 75.4% id, 12.3% wa, 1.4% hi, 0.0%
si
> Mem: 515800k total, 511628k used, 4172k free, 5812k buffers
> Swap: 2101032k total, 13152k used, 2087880k free, 163928k cache
 
Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

dg wrote:
> What kind of cables did 3ware provide, regular flat ribbon or round
> cables?

Flat. The only thing special about them was that they lacked slave
connectors.

I'm glad they're flat; despite the (lack of) air flow, at some point I
intend to try the fabled PATA cable origami methods I've heard about.

> While everything is still fresh in your mind, make sure you label
> the drives so you are absolutely sure which drive is which.

This does concern me. How the heck do I tell them apart, even now? How
do I figure out which drive is sda, which is sdb, which is sdc, etc.,
etc.? Advice is appreciated.

> For me, choosing between 2 hardware arrays or 1 software array would
> have been a big decision, the decision of all decisions.

Not me; all my research told me that software was the way to go for
both performance and downward-compatibility reasons.

> Great project by the way.

Thank you. It still amazes me to see that little '2.6T' label appear
in the 'df -h' output.

--
Read my Deep Thoughts @ <URL:http://www.ylee.org/blog/> PERTH ----> *
Cpu(s): 6.7% us, 3.6% sy, 0.4% ni, 75.7% id, 12.1% wa, 1.4% hi, 0.0% si
Mem: 515800k total, 511540k used, 4260k free, 6088k buffers
Swap: 2101032k total, 13096k used, 2087936k free, 161880k cached
 
Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

Yeechang Lee <ylee@pobox.com> writes:
>dg wrote:
>> While everything is still fresh in your mind, make sure you label
>> the drives so you are absolutely sure which drive is which.
>
>This does concern me. How the heck do I tell them apart, even now? How
>do I figure out which drive is sda, which is sdb, which is sdc, etc.,
>etc.? Advice is appreciated.

One way is to disconnect them one by one and see which drive is
missing from the list (unless you want to test the md driver's
reconstruction abilities, you should be doing this with a kernel that
does not have an md driver, probably booting from CD). You can also
use that method when a drive fails (but then it's even more important
that the kernel does not have an md driver).

Another way is to just look at which ports on the cards connect to
which drives. They are typically marked on the card and/or in the
manual with IDE0, IDE1, etc. You also have to find out which card is
which. There may be a method to do this through the PCI IDs, but I
would go for the disconnection method for that.
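
(A minimal sketch of the disconnection method, assuming a rescue
environment without the md driver; /proc/partitions serves as the
"list" above, and the device names are just illustrative:

# with all drives connected, note what the kernel sees
cat /proc/partitions    # write down the sda..sdh entries
# power off, unplug exactly one drive, boot the md-less rescue CD, then:
cat /proc/partitions    # the sdX that vanished is the unplugged drive

Repeat per drive, labelling as you go.)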

Followups set to comp.os.linux.hardware (because I read that, csiphs
would probably be more appropriate).

- anton
--
M. Anton Ertl Some things have to be seen to be believed
anton@mips.complang.tuwien.ac.at Most things have to be believed to be seen
http://www.complang.tuwien.ac.at/anton/home.html
 
Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

Dorothy Bradbury wrote:

-Fans and Noise from them

I could live with it. I will place it somewhere where the noise
doesn't matter, and the output will be redirected with a VNC server
to my workstations.

-Power Supply within 1U Servers

If I choose 8 disks, I will surely get a 550-watt power supply. But
with 3-4 disks, I could live with the stock PSU. After a year I will
upgrade it, because by then it could be failing (I saw some very nice
offers for used 1U servers).
 
Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

"Yeechang Lee" <ylee@pobox.com> wrote in message
news:slrnd1g04a.5mt.ylee@pobox.com...
>
> CONTROLLER CARDS
> Initial: Two Highpoint RocketRAID 454 cards.
> Actual: Two 3Ware 7506-4LP cards.
> Why: I needed PATA cards to go with my PATA drives, and also wanted to
> put the two PCI-X slots on my motherboard to use. I found exactly two
> PATA PCI-X controller cards: The 3Ware, and the Acard AEC-6897. Given
> that the Acard's Linux driver compatibility looked really, really
> iffy, I went with the 3Ware. I briefly considered the 7506-8 model,
> which would've saved me about $120, but figured I'd be better off
> distributing the bandwidth over two PCI-X slots rather than one.
>
No, one PCI-X card would be just as good.

You don't mention the ethernet card, which could also be PCI-X.
>
> SOFTWARE
> Initial: Linux software RAID 5 and XFS or JFS.
> Actual: Linux software RAID 5 and JFS.
> Why: Initially I planned on software RAID knowing that the Highpoint
> (and the equivalent Promise and Adaptec cards) didn't do true hardware
> RAID. Even after switching over to 3Ware (which *does* do true
> hardware RAID), everything I saw and read convinced me that software
> RAID was still the way to go for performance, long-term compatibility,
> and even 400GB extra space (given I'd be building one large RAID 5
> array instead of two smaller ones).
>
Is there a comparison of Linux RAID 5 to top-end RAID cards? I suspect 3Ware is
better.
 
Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

Dorothy Bradbury wrote:
> Watch PSU:
> o To the original poster & any multi-GB system, PSU matters
> ---- not just re s/w failure, but h/w failure
> ---- very rare, but this IS an area where over-capacity is an idea

PSU concerns are why I went with an Antec 550W supply as opposed to
some 300-400W noname brand. Since my rackmount case does not have room
for a redundant supply, I suspect this is the best I can do. As you
say, PSU problems are relatively rare.

That said, anyone know how I can dynamically measure the actual
wattage used by my system, beyond just adding up each individual
component's wattage?

--
Read my Deep Thoughts @ <URL:http://www.ylee.org/blog/> PERTH ----> *
Cpu(s): 6.9% us, 3.5% sy, 0.8% ni, 75.8% id, 11.7% wa, 1.3% hi, 0.0% si
Mem: 515800k total, 399300k used, 116500k free, 3980k buffers
Swap: 2101032k total, 13360k used, 2087672k free, 47212k cached
 
Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

In article <slrnd1hdsp.5mt.ylee@pobox.com>,
Yeechang Lee <ylee@pobox.com> wrote:
>Dorothy Bradbury wrote:
>> Watch PSU:
>> o To the original poster & any multi-GB system, PSU matters
>> ---- not just re s/w failure, but h/w failure
>> ---- very rare, but this IS an area where over-capacity is an idea
>
>PSU concerns are why I went with an Antec 550W supply as opposed to
>some 300-400W noname brand. Since my rackmount case does not have room
>for a redundant supply, I suspect this is the best I can do. As you
>say, PSU problems are relatively rare.
>
>That said, anyone know how I can dynamically measure the actual
>wattage used by my system, beyond just adding up each individual
>component's wattage?


http://www.ahernstore.com/p4400.html (the Kill-A-Watt), about $30. I've got one.

--

a d y k e s @ p a n i x . c o m

Don't blame me. I voted for Gore.
 
Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

In article <cvad98$q9q$1@panix5.panix.com>, adykes@panix.com (Al Dykes)
wrote:

> In article <slrnd1hdsp.5mt.ylee@pobox.com>,
> Yeechang Lee <ylee@pobox.com> wrote:
> >
> >That said, anyone know how I can dynamically measure the actual
> >wattage used by my system, beyond just adding up each individual
> >component's wattage?
> >
>
> http://www.ahernstore.com/p4400.html about $30. I've got one.


Another option is the Watts-Up meter, which I've been using for a few
years; it's been very solid and reliable. But I don't know whether
it's any better than the Kill-A-Watt, which costs a quarter as much.

There's a new Watts-Up Pro that has a nifty-looking PC (Windows)
interface: http://www.nooutage.com/wattsup-pro.htm ... So geekorific, I
might have to get one.

--
Forward and fiaka! Manacle an den gosaka!
 
Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

Eric Gisin wrote:
> "Yeechang Lee" <ylee@pobox.com> wrote in message
> news:slrnd1g04a.5mt.ylee@pobox.com...
>
>>CONTROLLER CARDS
>>Initial: Two Highpoint RocketRAID 454 cards.
>>Actual: Two 3Ware 7506-4LP cards.
>>Why: I needed PATA cards to go with my PATA drives, and also wanted to
>>put the two PCI-X slots on my motherboard to use. I found exactly two
>>PATA PCI-X controller cards: The 3Ware, and the Acard AEC-6897. Given
>>that the Acard's Linux driver compatibility looked really, really
>>iffy, I went with the 3Ware. I briefly considered the 7506-8 model,
>>which would've saved me about $120, but figured I'd be better off
>>distributing the bandwidth over two PCI-X slots rather than one.
>>
>
> No, one PCI-X card would be just as good.

Not necessarily. PCI (and PCI-X) bandwidth is per bus, not per slot.
So if those two cards are in two slots on one PCI-X bus, that's not
distributing the bandwidth at all. The motherboard may offer multiple
PCI-X busses, in which case the OP may want to ensure the cards are in
slots that correspond to different busses. The built-in NIC on most
motherboards (along with most other built-in devices) is also on one
(or more) of the PCI busses, so consider the bandwidth used by those
as well when distributing the load.
 
Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

I need to stay away from this thread for a while; I am starting to feel some
inspiration. It has been some time since I have run Linux, and, well, to be
honest I have always had an urge to build a functional Linux box for myself.
And RAID fascinates me, so, well, I need to stop reading this stuff. I
can't afford a new toy now.

--Dan

"Yeechang Lee" <ylee@pobox.com> wrote in message
news:slrnd1gcv7.5mt.ylee@pobox.com...
> > Great project by the way.
>
> Thank you. It's still amazes me to see that little '2.6T' label appear
> in the 'df -h' output.
 
Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

"Eric Gisin" <ericgisin@hotmail.com> wrote in message news:cvava201pbk@enews4.newsguy.com
> "Yeechang Lee" <ylee@pobox.com> wrote in message news:slrnd1g04a.5mt.ylee@pobox.com...
> >
> > CONTROLLER CARDS
> > Initial: Two Highpoint RocketRAID 454 cards.
> > Actual: Two 3Ware 7506-4LP cards.
> > Why: I needed PATA cards to go with my PATA drives, and also wanted to
> > put the two PCI-X slots on my motherboard to use. I found exactly two
> > PATA PCI-X controller cards: The 3Ware, and the Acard AEC-6897. Given
> > that the Acard's Linux driver compatibility looked really, really
> > iffy, I went with the 3Ware. I briefly considered the 7506-8 model,
> > which would've saved me about $120, but figured I'd be better off
> > distributing the bandwidth over two PCI-X slots rather than one.
> >
> No, one PCI-X card would be just as good.

Probably, yes.
Depends on what PCI-X (version, clock) you have and whether the slots
are separate PCI buses or not.

If they are separate buses, the highest clock is attainable and each
gets the full PCI-X bandwidth, say 1GB/s (133MHz) or 533MB/s (66MHz).
If they are on the same bus, the clock is lower to start with and
they have to share that bus's PCI-X bandwidth: say a still-plenty
400MB/s each (100MHz), but it may become iffy in the case of a 66MHz
clock (266MB/s) or even 50MHz.

>
> You don't mention the ethernet card, which could also be PCI-X.

What if?

 
Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

John-Paul Stewart wrote:
> > No, one PCI-X card would be just as good.
>
> Not necessarily. PCI (and PCI-X) bandwidth is per bus, not per slot.

The Supermicro X5DAL-G motherboard does indeed offer a dedicated bus
to each PCI-X slot, thus my desire to spread out the load with two
cards. Otherwise I'd have gone with the 7506-8 eight-channel card
instead and saved about $120.

The built-in Gigabit Ethernet jack does indeed share one of the PCI-X
slots' buses, but I only have a 100Mbit router right now. I wonder
whether I should expect it to significantly contribute to overall
bandwidth usage on that bus, either now or if/when I upgrade to
Gigabit?

--
Read my Deep Thoughts @ <URL:http://www.ylee.org/blog/> PERTH ----> *
Cpu(s): 5.6% us, 5.4% sy, 0.2% ni, 73.9% id, 10.4% wa, 4.6% hi, 0.0% si
Mem: 515800k total, 511808k used, 3992k free, 1148k buffers
Swap: 2101032k total, 240k used, 2100792k free, 345344k cached
 
Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

> > > No, one PCI-X card would be just as good.
> >
> > Not necessarily. PCI (and PCI-X) bandwidth is per bus, not per slot.
>
> The Supermicro X5DAL-G motherboard does indeed offer a dedicated bus
> to each PCI-X slot, thus my desire to spread out the load with two
> cards. Otherwise I'd have gone with the 7506-8 eight-channel card
> instead and saved about $120.
>
> The built-in Gigabit Ethernet jack does indeed share one of the PCI-X
> slots' buses, but I only have a 100Mbit router right now. I wonder
> whether I should expect it to significantly contribute to overall
> bandwidth usage on that bus, either now or if/when I upgrade to
> Gigabit?

The numbers that you posted from Bonnie++, if I followed them
correctly, showed max throughputs in the 20 MB/second range. That
seems awfully slow for this sort of setup.

As a comparison, I have two machines with software RAID 5 arrays, one a
2x866 P3 system with 5x120-gig drives, the other an A64 system with 8x300
gig drives, and both of them can read and write to/from their RAID 5 array
at 45+ MB/s, even with the controller cards plugged into a single 32/33 PCI
bus.

To answer your question, GigE at full speed is a bit more than 100
MB/sec. The PCI-X busses on that motherboard are both capable of at least
100 MHz operation, which at 64 bits would give you a max *realistic*
throughput of about 500 MB/second, so any performance detriment from using
the gigE would likely be completely insignificant.
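
(The raw arithmetic behind those figures: a 64-bit bus at 100MHz
moves 64 x 100M = 6,400Mbit/s, or 800MB/s theoretical, of which
roughly 500MB/s is achievable in practice; full-speed gigabit
Ethernet is 1,000Mbit/s, or about 125MB/s before protocol overhead.)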

I've got another machine with a 3Ware 7000-series card with a bunch of
120-gig drives on it (I haven't looked at the machine in quite a while), and
I was pretty disappointed with the performance from that controller. It
works for the intended usage (point-in-time snapshots), but responsiveness
of the machine under disk I/O is pathetic - even with dual Xeons.

steve
 
Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

"Yeechang Lee" <ylee@pobox.com> wrote in message
news:slrnd1in9v.73q.ylee@pobox.com...
> The built-in Gigabit Ethernet jack does indeed share one of the PCI-X
> slots' buses, but I only have a 100Mbit router right now. I wonder
> whether I should expect it to significantly contribute to overall
> bandwidth usage on that bus, either now or if/when I upgrade to
> Gigabit?
>

When you DO go gigabit, be sure to at least do some basic throughput
benchmarks (even if it's just with a stopwatch, though I suspect you
will come up with a good method) and then compare afterwards. That is
really good data to get firsthand from somebody with such an extreme
array and well-documented hardware and software setup. Really good
stuff! I wonder what kind of data rates that array is capable of
within the machine too. Somewhere there is a guy claiming to get
90+MB per second over gigabit Ethernet using RAID arrays on both ends.

Gigabit switches are getting so cheap it's incredible.

--Dan
 
Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

Eric Gisin wrote:
> Is there a comparison of Linux RAID 5 to top-end RAID cards? I
> suspect 3Ware is better.

No, the consensus is that Linux software RAID 5 has the edge on even
3Ware (the consensus hardware RAID leader). See, among others,
<URL:http://www.chemistry.wustl.edu/~gelb/castle_raid.html> (which
does note that software striping two 3Ware hardware RAID 5 solutions
"might be competitive" with software) and
<URL:http://staff.chess.cornell.edu/~schuller/raid.html> (which states
that no, all-software still has the edge in such a scenario).

--
Read my Deep Thoughts @ <URL:http://www.ylee.org/blog/> PERTH ----> *
Cpu(s): 5.6% us, 5.6% sy, 0.3% ni, 72.2% id, 11.9% wa, 4.5% hi, 0.0% si
Mem: 515800k total, 512004k used, 3796k free, 37608k buffers
Swap: 2101032k total, 240k used, 2100792k free, 293748k cached
 
Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

In article <slrnd1inqs.73q.ylee@pobox.com>,
Yeechang Lee <ylee@pobox.com> wrote:
>Eric Gisin wrote:
>> Is there a comparison of Linux RAID 5 to top-end RAID cards? I
>> suspect 3Ware is better.
>
>No, the consensus is that Linux software RAID 5 has the edge on even
>3Ware (the consensus hardware RAID leader). See, among others,

If all you care about is "rod length check" long-sequential-read or
long-sequential-write performance, that's probably true. If, of
course, you restrict yourself to a single stream...

....of course, in the real world, people actually do short writes and
multi-stream large access every once in a while. Software RAID is
particularly bad at the former because it can't safely gather writes
without NVRAM. Of course, both software implementations *and* typical
cheap PCI RAID card (e.g. 3ware 7/8xxx) implementations are pretty
awful at the latter, too, and for no good reason that I could ever see.

--
Thor Lancelot Simon tls@rek.tjls.com

"The inconsistency is startling, though admittedly, if consistency is to be
abandoned or transcended, there is no problem." - Noam Chomsky
 
Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

Steve Wolfe wrote:
> The numbers that you posted from Bonnie++ , if I followed them correctly,
> showed max throughputs in the 20 MB/second range. That seems
> awfully slow for this sort of setup.

Agreed. However, those benchmarks were done with no tuning whatsoever
(and, as noted, the three distributed computing projects going full
blast); since then I've done some minor tweaking, notably the noatime
mount option, which has helped. I'd post newer benchmarks but the
array's right now rebuilding itself due to a kernel panic I caused by
trying to use smartctl to talk to the bare drives without invoking the
special 3ware switch.
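
For anyone tuning a similar setup, here is a sketch of both items,
assuming the volume and mount point from the 'df -h' output above;
the smartctl line uses the 3ware passthrough form, which queries the
drives through the controller instead of touching them directly:

# /etc/fstab entry with noatime, so reads don't trigger access-time updates
/dev/VolGroup01/LogVol00  /mnt/newspace  jfs  defaults,noatime  1 2

# SMART data for the drive on port 0 of the first card
# (the 7506 appears as /dev/twe0 under the 3w-xxxx driver)
smartctl -a -d 3ware,0 /dev/twe0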

> To answer your question, GigE at full speed is a bit more than
> 100 MB/sec. The PCI-X busses on that motherboard are both capable
> of at least 100 MHz operation, which at 64 bits would give you a max
> *realistic* throughput of about 500 MB/second, so any performance
> detriment from using the gigE would likely be completely
> insignificant.

That was my sense as well; I suspect network saturation-by-disk will
only cease to be an issue when we all hit the 10GigE world.

(Actually, the 7506 cards are 66MHz PCI-X, so they don't take full
advantage of the theoretical bandwidth available on the slots,
anyway.)

> I've got another machine with a 3Ware 7000-series card with a bunch of
> 120-gig drives on it (I haven't looked at the machine in quite a
> while), and I was pretty disappointed with the performance from that
> controller.

Appreciate the report. Fortunately, as a home user, performance isn't
my prime consideration (nor, given that I'm only recording TV
episodes, is data integrity, really; hence no backup plans for the
array, even if backing up 2.8TB were practical in any way
budgetwise). Were I after performance, I'd probably have gone with
the 9000-series controllers and SATA drives, but my wallet's busted
enough with what I already have!

--
Read my Deep Thoughts @ <URL:http://www.ylee.org/blog/> PERTH ----> *
Cpu(s): 4.7% us, 3.2% sy, 0.3% ni, 75.7% id, 14.0% wa, 2.0% hi, 0.0% si
Mem: 515800k total, 510704k used, 5096k free, 18540k buffers
Swap: 2101032k total, 240k used, 2100792k free, 305484k cached
 
Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

Steve Wolfe wrote:
>
> The numbers that you posted from Bonnie++ , if I followed them correctly,
> showed max throughputs in the 20 MB/second range. That seems awfully slow
> for this sort of setup.

I noticed that, too, but then saw that the OP seemed to be running
three copies of Bonnie++ in parallel. His command line was:

'bonnie++ -s 4G -m 3ware-swraid5-type -p 3 ; \
bonnie++ -s 4G -m 3ware-swraid5-type-c1 -y & \
bonnie++ -s 4G -m 3ware-swraid5-type-c2 -y & \
bonnie++ -s 4G -m 3ware-swraid5-type-c3 -y &'

I'm no expert, but if he's running three in parallel on the same
software RAID, I'd suspect that the total performance should be taken as
the *sum* of those three---or over 60 MB/sec.
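
(Summing the posted figures bears that out: block output is 15897 +
17265 + 16143 = roughly 49,000 K/sec, or about 48 MB/s aggregate, and
block input is 20245 + 20196 + 20231 = roughly 60,700 K/sec, or about
60 MB/s aggregate.)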

> As a comparison, I have two machines with software RAID 5 arrays, one a
> 2x866 P3 system with 5x120-gig drives, the other an A64 system with 8x300
> gig drives, and both of them can read and write to/from their RAID 5 array
> at 45+ MB/s, even with the controller cards plugged into a single 32/33 PCI
> bus.

As another point of comparison: 5x73GB SCSI drives, software RAID-5,
one U160 SCSI channel, 32-bit/33-MHz bus, dual 1GHz P-III: writes at
36 MB/sec and reads at 74 MB/sec.
 

peter

Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

> (Actually, the 7506 cards are 66MHz PCI-X, so they don't take full
> advantage of the theoretical bandwidth available on the slots,
> anyway.)
>
There is no 66MHz PCI-X.
3Ware 7506 cards are PCI 2.2 compliant 64-bit/66MHz bus master.
 
Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

> I noticed that, too, but then noticed that the OP seemed to be running
> three copies of Bonnie++ in parallel. His command line was:
>
> 'bonnie++ -s 4G -m 3ware-swraid5-type -p 3 ; \
> bonnie++ -s 4G -m 3ware-swraid5-type-c1 -y & \
> bonnie++ -s 4G -m 3ware-swraid5-type-c2 -y & \
> bonnie++ -s 4G -m 3ware-swraid5-type-c3 -y &'
>
> I'm no expert, but if he's running three in parallel on the same
> software RAID, I'd suspect that the total performance should be taken as
> the *sum* of those three---or over 60 MB/sec.

Good point- I missed that!

steve
 
Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

"Peter" <peterfoxghost@yahoo.ca> wrote in message news:37ugirF5jj2l9U1@individual.net
> > (Actually, the 7506 cards are 66MHz PCI-X, so they don't take full
> > advantage of the theoretical bandwidth available on the slots, anyway.)
> >
> There is no 66MHz PCI-X.

The PCI-SIG seems to think differently. Perhaps you know better,
then? And contrary to what you say elsewhere, they say there is no
100MHz spec; that was added by the industry.

> 3Ware 7506 cards are PCI 2.2 compliant 64-bit/66MHz bus master.
 
Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

I wrote earlier:
> > While everything is still fresh in your mind, make sure you label
> > the drives so you are absolutely sure which drive is which.
>
> This does concern me. How the heck do I tell them apart, even now?
> How di I figure out which drive is sda, which is sdb, which is sdc,
> etc., etc.?

As it turns out, it proved straightforward to use either 'smartctl -a
--device=3ware,[0-3] /dev/twe[0-1]' or 3Ware's 3dm2 and tw_cli tools
(available on the Web site) to read the serial numbers of the drives.
So mystery solved.
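
For example (a sketch; the port index 0-3 maps to the four connectors
on each card, and /dev/twe0 and /dev/twe1 are the two controllers):

# serial number of the drive on port 2 of the second card
smartctl -i -d 3ware,2 /dev/twe1 | grep -i serial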

--
Read my Deep Thoughts @ <URL:http://www.ylee.org/blog/> PERTH ----> *
Cpu(s): 6.9% us, 3.2% sy, 2.7% ni, 77.6% id, 8.3% wa, 1.3% hi, 0.0% si
Mem: 515800k total, 511768k used, 4032k free, 10648k buffers
Swap: 2101032k total, 240k used, 2100792k free, 263108k cached
 
Archived from groups: alt.comp.hardware.pc-homebuilt,comp.sys.ibm.pc.hardware.storage,comp.arch.storage,comp.os.linux.hardware

Peter wrote:
> There is no 66MHz PCI-X.
> 3Ware 7506 cards are PCI 2.2 compliant 64-bit/66MHz bus master.

What's the difference? I thought 64-bit/66MHz PCI *was* PCI-X.

--
Read my Deep Thoughts @ <URL:http://www.ylee.org/blog/> PERTH ----> *
Cpu(s): 6.9% us, 3.2% sy, 2.7% ni, 77.6% id, 8.3% wa, 1.3% hi, 0.0% si
Mem: 515800k total, 511048k used, 4752k free, 11788k buffers
Swap: 2101032k total, 240k used, 2100792k free, 261024k cached
 
