SCSI vs SATA High-Perf

Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

Hello all,

Which of the following two architectures would you choose for a
high-perf NFS server in a cluster environment? Most of our data ( 80% ) is
small ( < 64 KB ) files. Reads and writes are similar and mostly random
in nature:

Architecture 1:
Tyan 2882
2xOpteron 246
4 GB RAM
2x80GB SATA ( System )
2x12-Way 3Ware Cards
24x73GB 10k RPM Western Digital Raptors
Software RAID 10 on Linux 2.6.x
XFS

Architecture 2:
Tyan 2881 with Dual U-320 SCSI
2xOpteron 246
4 GB RAM
2x80GB SATA ( System )
12x146GB Fujitsu 10k SCSI
Software RAID 10 on Linux
XFS

The price for both systems is almost the same. Considerations:

- Number of Spindles: Solution 1 looks like it might have an edge here
for small sequential reads and writes, since there are twice as
many spindles.

- PCI Bus Saturation: Solution 1 also appears to have an edge in case
we use large sequential reads. Solution 2 would be limited by the dual
SCSI bus bandwidth of 640MB/s. I doubt we would ever reach that level of
bandwidth in any random-read or random-write situation, and in our small
random file scenario I think both systems would perform equally. Any
comments?

- MTBF: Solution 2 has a definite edge. Some numbers:

MTBF1= 1 / ( 24* 1/1.2million + 2/1million ) = 45454.54 hours

Raptor MTBF = 1,200,000 hours; 3Ware MTBF = 1,000,000 hours

MTBF2= 1 / ( 12* 1/1.2million ) = 100,000 hours

Not surprisingly, Solution 2 is twice as reliable. This doesn't take
into account the novelty of the SATA Raptor drive and the proven track
record of the SCSI solution. In any case, comments on this MTBF point
are welcome.
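
(A minimal sketch of the series-system arithmetic above, in Python; it
treats every component as an independent constant-failure-rate part and
assumes the Fujitsus match the Raptors' 1,200,000-hour MTBF, as the
MTBF2 formula does.)

    # Series-system MTBF: failure rates (1/MTBF) of independent parts add.
    def series_mtbf(components):
        """components: list of (count, mtbf_hours) pairs."""
        return 1.0 / sum(count / mtbf for count, mtbf in components)

    # Solution 1: 24 Raptors (1.2M hr each) plus 2 3Ware cards (1M hr each).
    print(series_mtbf([(24, 1.2e6), (2, 1.0e6)]))  # ~45454.5 hours
    # Solution 2: 12 SCSI drives, assumed 1.2M hr each.
    print(series_mtbf([(12, 1.2e6)]))              # 100000.0 hours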

- RAID Performance: I am not sure about this. In principle both
solutions should behave the same since we are using SW RAID, but I don't
know how the fact that SCSI is a bus with overhead would affect RAID
performance. What do you think? Any ideas as to how to spread the
RAID 10 in a dual U-320 SCSI scenario?
SATA, being point-to-point, appears to have an edge again, but your
thoughts are welcome.

- Would I get a considerable edge if I used 15k SCSI drives? I am not
totally convinced that SATA is our best choice. Any help is greatly
appreciated.

Many thanks,

Parsifal
  1. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    > Hello all,
    >
    > Which of the following two architectures would you choose for a
    > high-perf NFS server in a cluster environment? Most of our data ( 80% ) is
    > small ( < 64 KB ) files. Reads and writes are similar and mostly random
    > in nature:
    >
    > Architecture 1:
    > Tyan 2882
    > 2xOpteron 246
    > 4 GB RAM
    > 2x80GB SATA ( System )
    > 2x12-Way 3Ware Cards
    > 24x73GB 10k RPM Western Digital Raptors
    > Software RAID 10 on Linux 2.6.x
    > XFS
    >
    > Architecture 2:
    > Tyan 2881 with Dual U-320 SCSI
    > 2xOpteron 246
    > 4 GB RAM
    > 2x80GB SATA ( System )
    > 12x146GB Fujitsu 10k SCSI
    > Software RAID 10 on Linux
    > XFS
    >
    > The price for both systems is almost the same. Considerations:
    >
    > - Number of Spindles: Solution 1 looks like it might have an edge here
    > for small sequential reads and writes, since there are twice as
    > many spindles.

    Yes, but Raptors have 226 IO/s vs. Fujitsu 269 IO/s.
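
    (Per-drive IOPS times spindle count still favors the 24-drive box, if
    the quoted per-drive figures hold: 24 * 226 = 5424 IO/s vs.
    12 * 269 = 3228 IO/s.)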

    > - PCI Bus Saturation: Solution 1 also appears to have an edge in case
    > we use large sequential reads. Solution 2 would be limited by the dual
    > SCSI bus bandwidth of 640MB/s. I doubt we would ever reach that level of
    > bandwidth in any random-read or random-write situation, and in our small
    > random file scenario I think both systems would perform equally. Any
    > comments?

    You are designing for NFS, right? Don't forget that network IO and
    SCSI IO are on the same PCI-X 64bit 100MHz bus. Therefore the available
    throughput will be 800MB/s * 0.5 = 400MB/s.

    In random operations, if you get 200 IO/s from each SCSI disk,
    you will have 12 disks * 200 IO/s * 64KB = ~154MB/s.
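
    (A quick back-of-the-envelope check of both figures as a Python
    sketch; the 50% bus split between network and SCSI traffic and the
    200 IO/s per disk are the assumptions stated above.)

        # PCI-X 64-bit @ 100MHz moves 8 bytes per clock cycle.
        bus_mb_s = 8 * 100e6 / 1e6    # 800 MB/s raw
        print(bus_mb_s * 0.5)         # ~400 MB/s if NFS and SCSI share the bus

        # Aggregate random throughput: disks * IO/s * transfer size.
        print(12 * 200 * 64e3 / 1e6)  # ~153.6 MB/s, under the shared-bus limit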

    > - MTBF: Solution 2 has a definite edge. Some numbers:
    >
    > MTBF1= 1 / ( 24* 1/1.2million + 2/1million ) = 45454.54 hours
    >
    > Raptor MTBF = 1,200,000 hours; 3Ware MTBF = 1,000,000 hours
    >
    > MTBF2= 1 / ( 12* 1/1.2million ) = 100,000 hours

    How did you calculate your total MTBF???
    Your calcs may be good for RAID0 but not for RAID10.

    Assuming a 5 year period, for a 1,200,000 hour MTBF disk
    reliability is about 0.964.

    For RAID10 (stripe of mirrored drives) in 6x2 configuration
    the equivalent MTBF will be 5,680,000 hours.

    Assuming a 5 year period, for a 1,000,000 hour MTBF disk
    reliability is about 0.957.

    For RAID10 (stripe of mirrored drives) in 12x2 configuration
    the equivalent MTBF will be 2,000,000 hours.

    For a single RAID1 of the 1,000,000 hr MTBF drives
    the equivalent MTBF will be 23,800,000 hours.
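
    (One way to reproduce these numbers, as a Python sketch -- a simple
    mission-reliability model that ignores repair/rebuild: per-drive
    reliability over the mission is R = exp(-t/MTBF), a mirrored pair
    fails only if both drives fail, and the stripe needs every pair
    alive.)

        import math

        T = 5 * 365 * 24  # 5-year mission = 43800 hours

        def raid10_equiv_mtbf(drive_mtbf, n_pairs, t=T):
            r_drive = math.exp(-t / drive_mtbf)  # per-drive reliability over t
            r_pair = 1 - (1 - r_drive) ** 2      # mirror dies only if both drives die
            r_array = r_pair ** n_pairs          # stripe needs every pair alive
            return -t / math.log(r_array)        # convert back to an equivalent MTBF

        print(raid10_equiv_mtbf(1.2e6, 6))   # ~5.68e6 hr: 12 disks as 6x2
        print(raid10_equiv_mtbf(1.0e6, 12))  # ~1.99e6 hr: 24 disks as 12x2
        print(raid10_equiv_mtbf(1.0e6, 1))   # ~2.38e7 hr: a single RAID1 pair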

    BTW, 3Ware controllers are PCI 2.2 64bit 66MHz.
    I can't believe that their MTBF is so low (1,000,000 hr).
    If you lose one, your RAID will probably go down too.

    > Not surprisingly, Solution 2 is twice as reliable. This doesn't take
    > into account the novelty of the SATA Raptor drive and the proven track
    > record of the SCSI solution. In any case, comments on this MTBF point
    > are welcome.
    >
    > - RAID Performance: I am not sure about this. In principle both
    > solutions should behave the same since we are using SW RAID, but I don't
    > know how the fact that SCSI is a bus with overhead would affect RAID
    > performance. What do you think? Any ideas as to how to spread the
    > RAID 10 in a dual U-320 SCSI scenario?
    > SATA, being point-to-point, appears to have an edge again, but your
    > thoughts are welcome.
    >
    > - Would I get a considerable edge if I used 15k SCSI drives?

    In theory up to 40%.

    > I am not
    > totally convinced that SATA is our best choice.

    Agree.

    > Any help is greatly
    > appreciated.
    >
    > Many thanks,
    >
    > Parsifal
    >
  2. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    Arno Wagner wrote:
    > In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:

    >
    > One thing you can be relatively sure of is that the SCSI controller
    > will work well with the mainboard. Also Linux has a long history of
    > supporting SCSI, while SATA support is new and still being worked on.
    >
    > For your access scenario, SCSI will also be superior, since SCSI
    > has supported command queuing for a long time.
    >
    > I also would not trust the Raptors as much as I would trust SCSI drives.
    > The SCSI manufacturers know that SCSI customers expect high
    > reliability, while the Raptor is more a poor man's race car.


    My main concern is their novelty, rather than their performance. Call
    it a hunch, but it just doesn't feel right to risk it while there's a
    proven, solid SCSI solution for the same price.

    >
    > One more argument: You can put Config 2 on a 550W (redundant)
    > PSU, while Config 1 will need something significantly larger,

    Thanks for your comments. I forgot about the power. Definitely worth
    considering, since we're getting 3 of these servers and UPS sizing
    should also play into the cost equation.


    > also because SATA does not support staggered start-up, while
    > SCSI does. Is that already factored into the cost?

    This I don't follow, what's staggered start-up?

    Parsifal


    >
    > Arno
  3. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    Peter wrote:
    [ Stuff Deleted ]
    > > - Number of Spindles: Solution 1 looks like it might have an edge here
    > > for small sequential reads and writes, since there are twice as
    > > many spindles.
    >
    > Yes, but Raptors have 226 IO/s vs. Fujitsu 269 IO/s.

    Yep! I like those Fujitsus, and they are cheaper than the Cheetahs.

    >
    > > - PCI Bus Saturation: Solution 1 also appears to have an edge in case
    > > we use large sequential reads. Solution 2 would be limited by the dual
    > > SCSI bus bandwidth of 640MB/s. I doubt we would ever reach that level of
    > > bandwidth in any random-read or random-write situation, and in our small
    > > random file scenario I think both systems would perform equally. Any
    > > comments?
    >
    > You are designing for NFS, right? Don't forget that network IO and
    > SCSI IO are on the same PCI-X 64bit 100MHz bus. Therefore the available
    > throughput will be 800MB/s * 0.5 = 400MB/s.

    Uhmm... you're right. I guess I'll place a dual e1000 on the other
    PCI-X channel. See:

    ftp://ftp.tyan.com/datasheets/d_s2881_100.pdf


    >
    > In random operations, if you get 200 IO/s from each SCSI disk,
    > you will have 12 disks * 200 IO/s * 64KB = ~154MB/s.
    >
    > > - MTBF: Solution 2 has a definite edge. Some numbers:
    > >
    > > MTBF1= 1 / ( 24* 1/1.2million + 2/1million ) = 45454.54 hours
    > >
    > > Raptor MTBF = 1,200,000 hours; 3Ware MTBF = 1,000,000 hours
    > >
    > > MTBF2= 1 / ( 12* 1/1.2million ) = 100,000 hours
    >
    > How did you calculate your total MTBF???
    > Your calcs may be good for RAID0 but not for RAID10.

    Thanks for the correction. You're right again.

    >
    > Assuming a 5 year period, for a 1,200,000 hour MTBF disk
    > reliability is about 0.964.
    >
    > For RAID10 (stripe of mirrored drives) in 6x2 configuration
    > the equivalent MTBF will be 5,680,000 hours.
    >
    > Assuming a 5 year period, for a 1,000,000 hour MTBF disk
    > reliability is about 0.957.
    >
    > For RAID10 (stripe of mirrored drives) in 12x2 configuration
    > the equivalent MTBF will be 2,000,000 hours.
    >
    > For a single RAID1 of the 1,000,000 hr MTBF drives
    > the equivalent MTBF will be 23,800,000 hours.

    Excuse my ignorance, but how did you get these numbers? In any case,
    your numbers show that the MTBF of solution 1 is about half that of
    solution 2.

    >
    > BTW, 3Ware controllers are PCI 2.2 64bit 66MHz.
    > I can't believe that their MTBF is so low (1,000,000 hr).
    > If you lose one, your RAID will probably go down too.

    I thought it was a bit too low too, but there was no info on the 3ware
    site.

    >
    > > Not surprisingly, Solution 2 is twice as reliable. This doesn't take
    > > into account the novelty of the SATA Raptor drive and the proven track
    > > record of the SCSI solution. In any case, comments on this MTBF point
    > > are welcome.
    > >
    > > - RAID Performance: I am not sure about this. In principle both
    > > solutions should behave the same since we are using SW RAID, but I don't
    > > know how the fact that SCSI is a bus with overhead would affect RAID
    > > performance. What do you think? Any ideas as to how to spread the
    > > RAID 10 in a dual U-320 SCSI scenario?
    > > SATA, being point-to-point, appears to have an edge again, but your
    > > thoughts are welcome.
    > >
    > > - Would I get a considerable edge if I used 15k SCSI drives?
    >
    > In theory up to 40%.

    In reality, though, I would say 25-35%.

    >
    > > I am not
    > > totally convinced that SATA is our best choice.
    >
    > Agree.

    Thanks!

    >
    > > Any help is greatly
    > > appreciated.
    > >
    > > Many thanks,
    > >
    > > Parsifal
    > >
  4. Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

    J. Clarke wrote:
    > Arno Wagner wrote:
    >
    > > In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
    > >> Hello all,
    > >
    > >> Which of the following two architectures would you choose for a
    > >> high-perf NFS server in a cluster environment? Most of our data ( 80% ) is
    > >> small ( < 64 KB ) files. Reads and writes are similar and mostly random
    > >> in nature:
    > >
    > >> Architecture 1:
    > >> Tyan 2882
    > >> 2xOpteron 246
    > >> 4 GB RAM
    > >> 2x80GB SATA ( System )
    > >> 2x12-Way 3Ware Cards
    > >> 24x73GB 10k RPM Western Digital Raptors
    > >> Software RAID 10 on Linux 2.6.x
    > >> XFS
    > >
    > >> Architecture 2:
    > >> Tyan 2881 with Dual U-320 SCSI
    > >> 2xOpteron 246
    > >> 4 GB RAM
    > >> 2x80GB SATA ( System )
    > >> 12x146GB Fujitsu 10k SCSI
    > >> Software RAID 10 on Linux
    > >> XFS
    > >
    > >> The price for both systems is almost the same. Considerations:
    > >
    > >> - Number of Spindles: Solution 1 looks like it might have an edge here
    > >> for small sequential reads and writes, since there are twice as
    > >> many spindles.
    > >
    > >> - PCI Bus Saturation: Solution 1 also appears to have an edge in case
    > >> we use large sequential reads. Solution 2 would be limited by the dual
    > >> SCSI bus bandwidth of 640MB/s. I doubt we would ever reach that level of
    > >> bandwidth in any random-read or random-write situation, and in our small
    > >> random file scenario I think both systems would perform equally. Any
    > >> comments?
    > >
    > >> - MTBF: Solution 2 has a definite edge. Some numbers:
    > >
    > >> MTBF1= 1 / ( 24* 1/1.2million + 2/1million ) = 45454.54 hours
    > >
    > >> Raptor MTBF = 1,200,000 hours; 3Ware MTBF = 1,000,000 hours
    > >
    > >> MTBF2= 1 / ( 12* 1/1.2million ) = 100,000 hours
    > >
    > >> Not surprisingly, Solution 2 is twice as reliable. This doesn't take
    > >> into account the novelty of the SATA Raptor drive and the proven track
    > >> record of the SCSI solution. In any case, comments on this MTBF point
    > >> are welcome.
    > >
    > >> - RAID Performance: I am not sure about this. In principle both
    > >> solutions should behave the same since we are using SW RAID, but I don't
    > >> know how the fact that SCSI is a bus with overhead would affect RAID
    > >> performance. What do you think? Any ideas as to how to spread the
    > >> RAID 10 in a dual U-320 SCSI scenario?
    > >> SATA, being point-to-point, appears to have an edge again, but your
    > >> thoughts are welcome.
    > >
    > >> - Would I get a considerable edge if I used 15k SCSI drives? I am not
    > >> totally convinced that SATA is our best choice. Any help is greatly
    > >> appreciated.
    > >
    > > One thing you can be relatively sure of is that the SCSI controller
    > > will work well with the mainboard. Also Linux has a long history of
    > > supporting SCSI, while SATA support is new and still being worked on.
    >
    > If he's using 3ware host adapters then "SATA support" is not an
    > issue--that's handled by the processor on the host adapter and all that
    > the Linux driver does is give commands to that processor.
    >
    > Do you have any evidence to present that suggests that 3ware RAID
    > controllers have problems with any known mainboard?
    >
    > > For your access scenario, SCSI will also be superior, since SCSI
    > > has supported command queuing for a long time.
    >
    > I'm sorry, but it doesn't follow that because SCSI has supported
    > command queuing for a long time that the performance will be superior.
    >
    > > I also would not trust the Raptors as much as I would trust SCSI drives.
    > > The SCSI manufacturers know that SCSI customers expect high
    > > reliability, while the Raptor is more a poor man's race car.
    >
    > Actually a Raptor is an enterprise SCSI drive with an SATA chip on it
    > instead of a SCSI chip on it. The Raptors aren't "poor man's" _anything_,
    > they're Western Digital's enterprise drive. WD has chosen to take a risk
    > and make their enterprise line with SATA instead of SCSI. Are you
    > suggesting that WD is incapable of producing a reliable drive?
    >
    > If it was a Seagate Cheetah with an SATA chip would you say that it was
    > going to be unreliable?
    >
    > > One more argument: You can put Config 2 on a 550W (redundant)
    > > PSU, while Config 1 will need something significantly larger,
    > > also because SATA does not support staggered start-up, while
    > > SCSI does. Is that already factored into the cost?
    >
    > Uh, SATA requires one host interface for each drive. Whatever processor
    > is controlling those host interfaces can most assuredly stagger the
    > startup if that is an issue.
    >
    > Not saying that SCSI is not the superior solution, but the reasons given
    > seem to be ignoring the fact that a "smart" SATA RAID controller is being
    > compared with a "dumb" SCSI setup.


    Good point. Would the SCSI performance improve if I used a dual U-320
    super-duper SCSI RAID card? Since the RAID was going to be in SW
    anyway, I didn't see the reason for getting such a card. I had no other
    choice with the SATA solution, though.

    Parsifal

    >
    > > Arno
    >
    > --
    > --John
    > to email, dial "usenet" and validate
    > (was jclarke at eye bee em dot net)
  5. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
    > Hello all,

    > Which of the following two architectures would you choose for a
    > high-perf NFS server in a cluster environment? Most of our data ( 80% ) is
    > small ( < 64 KB ) files. Reads and writes are similar and mostly random
    > in nature:

    > Architecture 1:
    > Tyan 2882
    > 2xOpteron 246
    > 4 GB RAM
    > 2x80GB SATA ( System )
    > 2x12-Way 3Ware Cards
    > 24x73GB 10k RPM Western Digital Raptors
    > Software RAID 10 on Linux 2.6.x
    > XFS

    > Architecture 2:
    > Tyan 2881 with Dual U-320 SCSI
    > 2xOpteron 246
    > 4 GB RAM
    > 2x80GB SATA ( System )
    > 12x146GB Fujitsu 10k SCSI
    > Software RAID 10 on Linux
    > XFS

    > The price for both systems is almost the same. Considerations:

    > - Number of Spindles: Solution 1 looks like it might have an edge here
    > for small sequential reads and writes, since there are twice as
    > many spindles.

    > - PCI Bus Saturation: Solution 1 also appears to have an edge in case
    > we use large sequential reads. Solution 2 would be limited by the dual
    > SCSI bus bandwidth of 640MB/s. I doubt we would ever reach that level of
    > bandwidth in any random-read or random-write situation, and in our small
    > random file scenario I think both systems would perform equally. Any
    > comments?

    > - MTBF: Solution 2 has a definite edge. Some numbers:

    > MTBF1= 1 / ( 24* 1/1.2million + 2/1million ) = 45454.54 hours

    > Raptor MTBF = 1,200,000 hours; 3Ware MTBF = 1,000,000 hours

    > MTBF2= 1 / ( 12* 1/1.2million ) = 100,000 hours

    > Not surprisingly, Solution 2 is twice as reliable. This doesn't take
    > into account the novelty of the SATA Raptor drive and the proven track
    > record of the SCSI solution. In any case, comments on this MTBF point
    > are welcome.

    > - RAID Performance: I am not sure about this. In principle both
    > solutions should behave the same since we are using SW RAID, but I don't
    > know how the fact that SCSI is a bus with overhead would affect RAID
    > performance. What do you think? Any ideas as to how to spread the
    > RAID 10 in a dual U-320 SCSI scenario?
    > SATA, being point-to-point, appears to have an edge again, but your
    > thoughts are welcome.

    > - Would I get a considerable edge if I used 15k SCSI drives? I am not
    > totally convinced that SATA is our best choice. Any help is greatly
    > appreciated.

    One thing you can be relatively sure of is that the SCSI controller
    will work well with the mainboard. Also Linux has a long history of
    supporting SCSI, while SATA support is new and still being worked on.

    For your access scenario, SCSI will also be superior, since SCSI
    has supported command queuing for a long time.

    I also would not trust the Raptors as much as I would trust SCSI drives.
    The SCSI manufacturers know that SCSI customers expect high
    reliability, while the Raptor is more a poor man's race car.

    One more argument: You can put Config 2 on a 550W (redundant)
    PSU, while Config 1 will need something significantly larger,
    also because SATA does not support staggered start-up, while
    SCSI does. Is that already factored into the cost?

    Arno
  6. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    Arno Wagner wrote:

    > In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
    >> Hello all,
    >
    >> Which of the following two architectures would you choose for a
    >> high-perf NFS server in a cluster environment? Most of our data ( 80% ) is
    >> small ( < 64 KB ) files. Reads and writes are similar and mostly random
    >> in nature:
    >
    >> Architecture 1:
    >> Tyan 2882
    >> 2xOpteron 246
    >> 4 GB RAM
    >> 2x80GB SATA ( System )
    >> 2x12-Way 3Ware Cards
    >> 24x73GB 10k RPM Western Digital Raptors
    >> Software RAID 10 on Linux 2.6.x
    >> XFS
    >
    >> Architecture 2:
    >> Tyan 2881 with Dual U-320 SCSI
    >> 2xOpteron 246
    >> 4 GB RAM
    >> 2x80GB SATA ( System )
    >> 12x146GB Fujitsu 10k SCSI
    >> Software RAID 10 on Linux
    >> XFS
    >
    >> The price for both systems is almost the same. Considerations:
    >
    >> - Number of Spindles: Solution 1 looks like it might have an edge here
    >> for small sequential reads and writes, since there are twice as
    >> many spindles.
    >
    >> - PCI Bus Saturation: Solution 1 also appears to have an edge in case
    >> we use large sequential reads. Solution 2 would be limited by the dual
    >> SCSI bus bandwidth of 640MB/s. I doubt we would ever reach that level of
    >> bandwidth in any random-read or random-write situation, and in our small
    >> random file scenario I think both systems would perform equally. Any
    >> comments?
    >
    >> - MTBF: Solution 2 has a definite edge. Some numbers:
    >
    >> MTBF1= 1 / ( 24* 1/1.2million + 2/1million ) = 45454.54 hours
    >
    >> Raptor MTBF = 1,200,000 hours; 3Ware MTBF = 1,000,000 hours
    >
    >> MTBF2= 1 / ( 12* 1/1.2million ) = 100,000 hours
    >
    >> Not surprisingly, Solution 2 is twice as reliable. This doesn't take
    >> into account the novelty of the SATA Raptor drive and the proven track
    >> record of the SCSI solution. In any case, comments on this MTBF point
    >> are welcome.
    >
    >> - RAID Performance: I am not sure about this. In principle both
    >> solutions should behave the same since we are using SW RAID, but I don't
    >> know how the fact that SCSI is a bus with overhead would affect RAID
    >> performance. What do you think? Any ideas as to how to spread the
    >> RAID 10 in a dual U-320 SCSI scenario?
    >> SATA, being point-to-point, appears to have an edge again, but your
    >> thoughts are welcome.
    >
    >> - Would I get a considerable edge if I used 15k SCSI drives? I am not
    >> totally convinced that SATA is our best choice. Any help is greatly
    >> appreciated.
    >
    > One thing you can be relatively sure of is that the SCSI controller
    > will work well with the mainboard. Also Linux has a long history of
    > supporting SCSI, while SATA support is new and still being worked on.

    If he's using 3ware host adapters then "SATA support" is not an
    issue--that's handled by the processor on the host adapter and all that the
    Linux driver does is give commands to that processor.

    Do you have any evidence to present that suggests that 3ware RAID
    controllers have problems with any known mainboard?

    > For your access scenario, SCSI will also be superior, since SCSI
    > has supported command queuing for a long time.

    I'm sorry, but it doesn't follow that because SCSI has supported command
    queuing for a long time that the performance will be superior.

    > I also would not trust the Raptors as much as I would trust SCSI drives.
    > The SCSI manufacturers know that SCSI customers expect high
    > reliability, while the Raptor is more a poor man's race car.

    Actually a Raptor is an enterprise SCSI drive with an SATA chip on it
    instead of a SCSI chip on it. The Raptors aren't "poor man's" _anything_,
    they're Western Digital's enterprise drive. WD has chosen to take a risk
    and make their enterprise line with SATA instead of SCSI. Are you
    suggesting that WD is incapable of producing a reliable drive?

    If it was a Seagate Cheetah with an SATA chip would you say that it was
    going to be unreliable?

    > One more argument: You can put Config 2 on a 550W (redundant)
    > PSU, while Config 1 will need something significantly larger,
    > also because SATA does not support staggered start-up, while
    > SCSI does. Is that already factored into the cost?

    Uh, SATA requires one host interface for each drive. Whatever processor is
    controlling those host interfaces can most assuredly stagger the startup if
    that is an issue.

    Not saying that SCSI is not the superior solution but the reasons given seem
    to be ignoring the fact that a "smart" SATA RAID controller is being
    compared with a "dumb" SCSI setup.

    > Arno

    --
    --John
    to email, dial "usenet" and validate
    (was jclarke at eye bee em dot net)
  7. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    lmanna@gmail.com wrote:
    > Hello all,
    >
    > Which of the following two architectures would you choose for a
    > high-perf NFS server in a cluster environment? Most of our data ( 80% ) is
    > small ( < 64 KB ) files. Reads and writes are similar and mostly
    > random in nature:

    I wouldn't use either one of them since your major flaw would be using an
    Opteron when you should only be using Xeon or Itanium2 processors. Now, if
    you are just putting an MP3 server in the basement of your home for
    light-duty work you can squeak by with the Opterons. As for the drives, I
    would only use SCSI in the system you mention.


    Rita
  8. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    On 26 Mar 2005 01:01:12 -0800, lmanna@gmail.com wrote:


    >> also because SATA does not support staggered start-up, while
    >> SCSI does. Is that already factored into the cost?
    >
    > This I don't follow, what's staggered start-up?
    >

    It is a feature that staggers the spinup of each disk sequentially,
    leaving enough time between disk starts to prevent overloading the
    power supply. I think he meant that because he believed SATA does not
    do this, you would need a beefier power supply than you would with the
    SCSI setup to avoid problems on powerup.

    AFAIK delay start or staggered spinup (whatever you want to call it)
    is available on SATA, but it is controller specific (& most don't
    support it) and it is not a standard feature like the delay start &
    remote start jumpers on SCSI drives & backplanes.
  9. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    Opteron is not a processor to be taken seriously???? Any backing
    with hard numbers for what you're saying? We have a whole 64-node dual
    Opteron cluster running 64-bit applications for more than a year, and
    it's been not only reliable but, given the nature of our applications,
    crucial, in a time when Intel was resting on their 32-bit laurels and
    convincing the industry and neophytes that 64-bit equals Itanium only.
    I applaud AMD for their screw-Intel approach, giving folks like us a
    great cost-effective 64-bit option. If the Opteron wasn't successful,
    Intel would have never come up with the 64-bit Xeon; their mantra would
    have been "Buy Itanium". Have you tried to cost out a 64-node dual
    Itanic lately?? Moreover, our current file-servers are Xeon based and
    we don't feel confident in their running a 64-bit OS and/or XFS.

    The only consideration I had for the Xeons was their wider choice of
    mobo availability, and the new boards with 4x, 8x and 16x PCI-Express
    options, which might prevent PCI bus saturation in some extreme video
    streaming or large sequential read applications, which is not the case
    in our scenario. You might also need 10Gb Ethernet to cope with such a
    data stream.

    Parsifal
  10. Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

    In comp.sys.ibm.pc.hardware.storage J. Clarke <jclarke.usenet@snet.net.invalid> wrote:
    > Arno Wagner wrote:

    >> In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
    >>> Hello all,
    [...]
    >> One thing you can be relatively sure of is that the SCSI controller
    >> will work well with the mainboard. Also Linux has a long history of
    >> supporting SCSI, while SATA support is new and still being worked on.

    > If he's using 3ware host adapters then "SATA support" is not an
    > issue--that's handled by the processor on the host adapter and all that the
    > Linux driver does is give commands to that processor.

    > Do you have any evidence to present that suggests that 3ware RAID
    > controllers have problems with any known mainboard?

    No. I was mostly thinking of SMART support, which is not there
    for SATA on Linux (unless you use the old IDE driver). Normal disk
    access works fine in my experience.

    >> For your access scenario, SCSI will also be superior, since SCSI
    >> has supported command queuing for a long time.

    > I'm sorry, but it doesn't follow that because SCSI has supported command
    > queuing for a long time that the performance will be superior.

    Actually for small reads command queuing helps massively. The
    "has been available for a long time" just means that it will work.

    >> I also would not trust the Raptors as much as I would trust SCSI drives.
    >> The SCSI manufacturers know that SCSI customers expect high
    >> reliability, while the Raptor is more a poor man's race car.

    > Actually a Raptor is an enterprise SCSI drive with an SATA chip on it
    > instead of a SCSI chip on it. The Raptors aren't "poor man's" _anything_,
    > they're Western Digital's enterprise drive. WD has chosen to take a risk
    > and make their enterprise line with SATA instead of SCSI. Are you
    > suggesting that WD is incapable of producing a reliable drive?

    I am suggesting that WD's strategy is suspicious. It may be up
    to SCSI standards, but I have doubts. SATA is far too new to compete
    with SCSI on reliability and compatibility. And SCSI has a lot of
    features working for decades now that are still being implemented
    or are being planned for SATA.

    > If it was a Seagate Cheetah with an SATA chip would you say that it was
    > going to be unreliable?

    At least not as reliable as SCSI. The whole SATA technology is not as
    mature as SCSI is. It is also not as well designed.

    >> One more argument: You can put Config 2 on a 550W (redundant)
    >> PSU, while Config 1 will need something significantly larger,
    >> also because SATA does not support staggered start-up, while
    >> SCSI does. Is that already factored into the cost?

    > Uh, SATA requires one host interface for each drive. Whatever processor is
    > controlling those host interfaces can most assuredly stagger the startup if
    > that is an issue.

    The problem is that most (all?) SATA disks start themselves, while
    in SCSI that is usually a jumper option. Typical options are auto-start,
    auto-start with a selectable delay, and no auto-start. On SATA
    you would have to do staggered power or the like to get the same
    effect.

    > Not saying that SCSI is not the superior solution but the reasons
    > given seem to be ignoring the fact that a "smart" SATA RAID
    > controller is being compared with a "dumb" SCSI setup.

    Not really. It is more a relatively new, supposedly smart technology
    against a proven, older, reliable technology known to be smart.
    SCSI targets are really quite smart, while SATA targets are not too
    bright. The 3ware controllers may help some, but I doubt they
    can do that much.

    In addition the kernel knows how to talk to SCSI targets, while SATA is
    still in flux. Data transfer on SATA works, but everything else is
    still being worked on, like SMART support.

    The RAID logic is pretty smart in both cases, since it is done by the
    kernel, but with this many disks you _will_ want to poll defect
    lists/counts, drive temperature and the like periodically to get early
    warnings.
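
    (A minimal sketch of that kind of periodic polling, assuming
    smartmontools is installed and that the device names below match
    your system; the two attributes checked are illustrative, not a
    complete health check.)

        import subprocess, time

        DISKS = ["/dev/sda", "/dev/sdb"]  # hypothetical device list

        while True:
            for disk in DISKS:
                # 'smartctl -A' prints the drive's SMART attribute table.
                out = subprocess.run(["smartctl", "-A", disk],
                                     capture_output=True, text=True).stdout
                for line in out.splitlines():
                    # Reallocated sectors and temperature give early warnings.
                    if "Reallocated_Sector_Ct" in line or "Temperature" in line:
                        print(disk, line.strip())
            time.sleep(3600)  # poll hourly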

    Arno
  11. Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

    Arno Wagner wrote:

    > In comp.sys.ibm.pc.hardware.storage J. Clarke
    > <jclarke.usenet@snet.net.invalid> wrote:
    >> Arno Wagner wrote:
    >
    >>> In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
    >>>> Hello all,
    > [...]
    >>> One thing you can be relatively sure of is that the SCSI controller
    >>> will work well with the mainboard. Also Linux has a long history of
    >>> supporting SCSI, while SATA support is new and still being worked on.
    >
    >> If he's using 3ware host adapters then "SATA support" is not an
    >> issue--that's handled by the processor on the host adapter and all that
    >> the Linux driver does is give commands to that processor.
    >
    >> Do you have any evidence to present that suggests that 3ware RAID
    >> controllers have problems with any known mainboard?
    >
    > No. I was mostly thinking of SMART support, which is not there
    > for SATA on Linux (unless you use the old IDE driver). Normal disk
    > access works fine in my experience.

    Actually, that would be a function of the 3ware drivers. With a 3ware host
    adapter you do not use the SATA drivers, you use drivers specific to 3ware,
    and the 3ware drivers _do_ support SMART under Linux.

    >>> For your access scenario, SCSI will also be superior, since SCSI
    >>> has supported command queuing for a long time.
    >
    >> I'm sorry, but it doesn't follow that because SCSI has supported command
    >> queuing for a long time that the performance will be superior.
    >
    > Actually for small reads command queuing helps massively. The
    > "has been available for a long time" just means that it will work.

    So where is the evidence that SCSI command queuing works better for small
    reads than does SATA command queuing? In the absence of other evidence one
    might assume that SATA command queuing benefits from "lessons learned" with
    SCSI.

    >>> I also would not trust the Raptors as much as I would trust SCSI drives.
    >>> The SCSI manufacturers know that SCSI customers expect high
    >>> reliability, while the Raptor is more a poor man's race car.
    >
    >> Actually a Raptor is an enterprise SCSI drive with an SATA chip on it
    >> instead of a SCSI chip on it. The Raptors aren't "poor man's" _anything_,
    >> they're Western Digital's enterprise drive. WD has chosen to take a risk
    >> and make their enterprise line with SATA instead of SCSI. Are you
    >> suggesting that WD is incapable of producing a reliable drive?
    >
    > I am suggesting that WD's strategy is suspicious.

    Why? They see SATA as the coming thing. Are you suggesting that Western
    Digital is incapable of producing a SCSI drive?

    > It may be up
    > to SCSI standards, but I have doubts. SATA is far too new to compete
    > with SCSI on reliability

    Reliability in a disk is primarily a function of the mechanical components,
    not the interface. It is quite possible to put a bridge chip on a Cheetah
    that translates the existing SCSI interface into an SATA interface. Would
    that drive then be less reliable than the Cheetah that was not plugged into
    a bridge chip? Or are you suggesting that the state of the art in the
    manufacture of integrated circuits is such that for some reason a chip
    containing the circuits that support SATA is more likely to fail in service
    than one that contains the circuits that support SCSI?

    > and compatibility. And SCSI has a lot of
    > features working for decades now that are still being implemented
    > or are being planned for SATA.

    Such as?

    >> If it was a Seagate Cheetah with an SATA chip would you say that it was
    >> going to be unreliable?
    >
    > At least not as reliable as SCSI. The whole SATA technology is not as
    > mature as SCSI is. It is also not as well designed.

    In what specific ways?

    >>> One more argument: You can put Config 2 on a 550W (redundant)
    >>> PSU, while Config 1 will need something significantly larger,
    >>> also because SATA does not support staggered start-up, while
    >>> SCSI does. Is that already factored into the cost?
    >
    >> Uh, SATA requires one host interface for each drive. Whatever processor
    >> is controlling those host interfaces can most assuredly stagger the
    >> startup if that is an issue.
    >
    > The problem is that most (all?) SATA disks start themselves,

    Raptors have a jumper that selects startup in full power mode or startup in
    standby, intended specifically to address this issue.

    > while
    > in SCSI that is usually a jumper option. Typical options are auto-start,
    > auto-start with a selectable delay, and no auto-start. On SATA
    > you would have to do staggered power or the like to get the same
    > effect.

    Just tell the drive to come out of standby whenever you are ready.

    >> Not saying that SCSI is not the superior solution but the reasons
    >> given seem to be ignoring the fact that a "smart" SATA RAID
    >> controller is being compared with a "dumb" SCSI setup.
    >
    > Not really. It is more a relatively new, supposedly smart technology
    > against a proven, older, reliable technology known to be smart.
    > SCSI targets are really quite smart, while SATA targets are not too
    > bright. The 3ware controllers may help some, but I doubt they
    > can do that much.

    You have made enough statements about SATA that are simply not true that I
    wonder at the validity of your assessment.

    > In addition the kernel knows how to talk to SCSI targets, while SATA is
    > still in flux. Data transfer on SATA works, but everything else is
    > still being worked on, like SMART support.

    So let's see, you'd favor the use of a brand new LSI Logic SCSI RAID
    controller over a brand new LSI Logic SATA RAID controller because "the
    kernel knows how to talk to SCSI targets" despite the fact that both
    devices use brand new drivers?

    You're assuming that all contact with drives is via the SCSI or SATA kernel
    drivers and not through a dedicated controller with drivers specific to
    that controller.

    > The RAID logic is pretty smart in both cases, since it is done by the
    > kernel, but with this many disks you _will_ want to poll defect
    > lists/counts, drive temperature and the like periodically to get early
    > warnings.

    With the 3ware host adapter, the RAID logic is ON THE BOARD, _not_ in the
    kernel.

    The same is true for SATA RAID controllers from LSI Logic, Intel, Tekram,
    and several other vendors.

    > Arno

    --
    --John
    to email, dial "usenet" and validate
    (was jclarke at eye bee em dot net)
  12. Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

    Previously lmanna@gmail.com wrote:

    > J. Clarke wrote:
    >> Arno Wagner wrote:
    [...]
    >> Uh, SATA requires one host interface for each drive. Whatever processor
    >> is controlling those host interfaces can most assuredly stagger the
    >> startup if that is an issue.
    >>
    >> Not saying that SCSI is not the superior solution, but the reasons given
    >> seem to be ignoring the fact that a "smart" SATA RAID controller is being
    >> compared with a "dumb" SCSI setup.


    > Good point. Would the SCSI performance improve if I used a dual U-320
    > super-duper SCSI RAID card? Since the RAID was going to be in SW
    > anyway, I didn't see the reason for getting such a card. I had no other
    > choice with the SATA solution, though.

    Don't think so. Your set-up will spend most of its time waiting for seeks
    and rotational latency anyway, IMO. Maybe if you put the RAID1
    mirrors on separate channels, that would bring some write speed
    improvements.

    Arno
  13. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
    > Arno Wagner wrote:
    >> In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:

    >>
    >> One thing you can be relatively sure of is that the SCSI controller
    >> will work well with the mainboard. Also Linux has a long history of
    >> supporting SCSI, while SATA support is new and still being worked on.
    >>
    >> For your access scenario, SCSI will also be superior, since SCSI
    >> has supported command queuing for a long time.
    >>
    >> I also would not trust the Raptors as much as I would trust SCSI drives.
    >> The SCSI manufacturers know that SCSI customers expect high
    >> reliability, while the Raptor is more a poor man's race car.


    > My main concern is their novelty, rather than their performance. Call
    > it a hunch, but it just doesn't feel right to risk it while there's a
    > proven, solid SCSI solution for the same price.

    >>
    >> One more argument: You can put Config 2 on a 550W (redundant)
    >> PSU, while Config 1 will need something significantly larger,

    > Thanks for your comments. I forgot about the power. Definitely worth
    > considering, since we're getting 3 of these servers and UPS sizing
    > should also play into the cost equation.

    Power is critical to reliability. If you have a PSU with, say
    50% normal and 70% peak load, that is massively more reliable than
    one with 70%/100%. Also many PSUs die on start-up, since e.g.
    disks draw their peak currents on spindle start.

    >> also because SATA does not support staggered start-up, while
    >> SCSI does. Is that already factored into the cost?

    > This I don't follow, what's staggered start-up?

    You can jumper most (all?) SCSI drives to delay their spindle-start.
    Spindle start results in a massive amount of power drawn for some
    seconds. Maybe as much as 2-3 times the peaks you see during operation.

    SCSI drives can be jumpered to spin up on power-on or on receiving
    a start-unit command. Some also support delays. You should be
    able to set the SCSI controller to issue the start-unit command
    to the drives with, say, 5 seconds delay between each unit or so.
    This massively reduces the power drawn on start-up.

    SATA drives all (?) do spin-up on power-on. It is a problem
    when you have many disks. The PSU needs the reserves to deal
    with this worst case.

    Arno
  14. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    Arno Wagner wrote:

    > In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
    >> Arno Wagner wrote:
    >>> In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
    >
    >>>
    >>> One thing you can be relatively sure of is that the SCSI controller
    >>> will work well with the mainboard. Also Linux has a long history of
    >>> supporting SCSI, while SATA support is new and still being worked on.
    >>>
    >>> For your access scenario, SCSI will also be superior, since SCSI
    >>> has supported command queuing for a long time.
    >>>
    >>> I also would not trust the Raptors as much as I would trust SCSI drives.
    >>> The SCSI manufacturers know that SCSI customers expect high
    >>> reliability, while the Raptor is more a poor man's race car.
    >
    >
    >> My main concern is their novelty, rather than their performance. Call
    >> it a hunch, but it just doesn't feel right to risk it while there's a
    >> proven, solid SCSI solution for the same price.
    >
    >>>
    >>> One more argument: You can put Config 2 on a 550W (redundant)
    >>> PSU, while Config 1 will need something significantly larger,
    >
    >> Thanks for your comments. I forgot about the power. Definitely worth
    >> considering, since we're getting 3 of these servers and UPS sizing
    >> should also play into the cost equation.
    >
    > Power is critical to reliability. If you have a PSU with, say
    > 50% normal and 70% peak load, that is massively more reliable than
    > one with 70%/100%. Also many PSUs die on start-up, since e.g.
    > disks draw their peak currents on spindle start.
    >
    >>> also because SATA does not support staggered start-up, while
    >>> SCSI does. Is that already factored into the cost?
    >
    >> This I don't follow, what's staggered start-up?
    >
    > You can jumper most (all?) SCSI drives to delay their spindle-start.
    > Spindle start results in a massive amount of power drawn for some
    > seconds. Maybe as much as 2-3 times the peaks you see during operation.
    >
    > SCSI drives can be jumpered to spin up on power-on or on receiving
    > a start-unit command. Some also support delays. You should be
    > able to set the SCSI controller to issue the start-unit command
    > to the drives with, say, 5 seconds delay between each unit or so.
    > This massively reduces the power drawn on start-up.
    >
    > SATA drives all (?) do spin-up on power-on. It is a problem
    > when you have many disks. The PSU needs the reserves to deal
    > with this worst case.

    Would you do the world a favor and actually take ten minutes to research
    your statements before you make them? All SATA drives sold as "enterprise"
    drives have the ability to perform staggered spinup.

    > Arno

    --
    --John
    to email, dial "usenet" and validate
    (was jclarke at eye bee em dot net)
  15. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    In comp.sys.ibm.pc.hardware.storage "Rita Ä Berkowitz" <ritaberk2O04 @aol.com> wrote:
    > lmanna@gmail.com wrote:
    >> Hello all,
    >>
    >> Which of the following two architectures would you choose for a
    >> high-perf NFS server in a cluster environment? Most of our data ( 80% ) is
    >> small ( < 64 KB ) files. Reads and writes are similar and mostly
    >> random in nature:

    > I wouldn't use either one of them since your major flaw would be using an
    > Opteron when you should only be using Xeon or Itanium2 processors.

    Sorry, but that is BS. Itanium is mostly dead technology and not
    really developed anymore. It is also massively over-priced. Xeons are
    sort of not-quite-64-bit CPUs that have the main characteristic of
    being Intel and expensive.

    I also know of no indications (except marketing BS by Intel) that
    Opterons are unreliable.

    Arno
  16. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    Arno Wagner wrote:

    > Sorry, but that is BS. Itanium is mostly dead technology and not
    > really developed anymore. It is also massively over-priced. Xeons are
    > sort of not-quite 64 bit CPUs, that have the main characteristic of
    > being Intel and expensive.

    You need to catch up with the times. You are correct about the original
    Itaniums being dogs, but I'm talking about the new Itanium2 processors,
    which are also 64-bit. As for Intel being expensive, you get what you pay
    for. The new Itanium2 systems are SWEEEEEEET!

    > I also know of no indications (except marketing BS by Intel) that
    > Opterons are unreliable.

    It's being proven in the field daily. You simply don't see Opteron based
    solutions being deployed by major commercial and governmental entities.
    True, there are a few *novelty* systems that use many Opteron processors,
    but they are more a curiosity than the mainstream norm. That said, if I
    wanted a dirt-cheap gaming system I would opt for an Opteron based SATA box.


    Rita
  17. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    In comp.sys.ibm.pc.hardware.storage "Rita Ä Berkowitz" <ritaberk2O04 @aol.com> wrote:
    > Arno Wagner wrote:

    >> Sorry, but that is BS. Itanium is mostly dead technology and not
    >> really developed anymore. It is also massively over-priced. Xeons are
    >> sort of not-quite 64 bit CPUs, that have the main characteristic of
    >> being Intel and expensive.

    > You need to catch up with the times. You are correct about the original
    > Itaniums being dogs, but I'm talking about the new Itanium2 processors,
    > which are also 64-bit. As for Intel being expensive, you get what you pay
    > for. The new Itanium2 systems are SWEEEEEEET!

    You recommend a _new_ product for its reliability????
    I don't think I need to comment on that.

    >> I also know of no indications (except marketing BS by Intel) that
    >> Opterons are unreliable.

    > It's being proven in the field daily. You simply don't see Opteron based
    > solutions being deployed by major commercial and governmental entities.

    Which is a direct result of Intel's FUD and behind-the-scenes politics.
    In order to prove that something is unreliable, it has to be used and
    fail. It being not used does not indicate unreliability. It just
    indicates "nobody gets fired for buying Intel".

    So nothing is actually proven about reliability (or lack of)
    of Opterons in the field.

    > True, there are a few *novelty* systems that use many Opteron
    > processors, but they are more a curiosity than the mainstream
    > norm. That said, if I wanted a dirt-cheap gaming system I would opt
    > for an Opteron based SATA box.

    That is certainly true. As always, the question is to get the
    right balance for a specific application. If you have the money
    to buy the most expensive solution _and_ the clout to make the
    vendor not just rip you off, you certainly will get an adequate
    solution. But you will pay too much. Not all of us can afford
    to buy stuff the way the military does.

    Arno
  18. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    Rita Ä Berkowitz wrote:

    [nothing very significant]

    One really needs hip-boots to wade through the manure of these last few
    posts.

    1. Opteron systems have reliability comparable to Xeon systems, and if
    they lag Itanics by any margin at all it's not by much (Itanics do have
    a couple of additional internal RAS features that Opterons and Xeons
    lack, but the differences are not major ones).

    2. While Intel didn't do as excellent a job of adding 64-bit support to
    Xeons as AMD did with AMD64, once again the difference is not a dramatic
    one.

    3. The first Itanic wasn't just a dog, it was an absolute joke.
    McKinley and Madison are much more respectable but still consume
    inordinate amounts of power and are in general not performance-leading
    products: while the newest Madisons managed to regain a very small lead
    in SPECfp over POWER5 that's the only major benchmark they lead in (at
    least where the competition has bothered to show up: HP has done a fine
    job of carefully selecting specific benchmark niches which lacked such
    competition, though it has been a bit embarrassed in cases where it
    subsequently appeared), and Itanic often winds up not in second place
    but in third or even fourth behind POWER (not just POWER5 but often
    behind POWER4+ as well in commercial benchmarks), Opteron, Xeon, and/or
    SPARC64 - and for a year or so the top-of-the-line 1.5 GHz Madisons
    couldn't even beat the aging and orphaned previous-process-generation
    Alpha in SAP SD 2-tier, though they're now a bit ahead of it (this was
    the only commercial benchmark HP was willing to allow EV7 to compete in:
    it made Itanic look bad, but they needed it to beat the POWER4 score
    there).

    And that's for benchmarks, where the code has been profiled and
    optimized to within an inch of its life. Itanic is more dependent on
    such optimization to achieve a given level of performance than its more
    flexible out-of-order competition is, and hence falls farther behind
    their performance levels in real-world situations where much code is not
    so optimized.

    4. Nonetheless, Itanic is not an abandoned product. While its eventual
    success or failure is still to be determined, Intel is at least
    currently still pouring money, engineers, and time into it (though
    apparently not at quite the rate it was earlier: in the past year it's
    cut a new Itanic chipset from its plans which would have allowed faster
    bus speeds and axed a new Itanic core that the transplanted Alpha team
    was building for 2007, and what those engineers are now working on may
    or may not be Itanic-related).

    - bill
  19. Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

    Previously J. Clarke <jclarke.usenet@snet.net.invalid> wrote:
    > Arno Wagner wrote:

    >> In comp.sys.ibm.pc.hardware.storage J. Clarke
    >> <jclarke.usenet@snet.net.invalid> wrote:
    >>> Arno Wagner wrote:
    >>
    >>>> In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
    >>>>> Hello all,
    >> [...]
    >>>> One thing you can be relatively sure of is that the SCSI controller
    >>>> will work well with the mainboard. Also Linux has a long history of
    >>>> supporting SCSI, while SATA support is new and still being worked on.
    >>
    >>> If he's using 3ware host adapters then "SATA support" is not an
    >>> issue--that's handled by the processor on the host adapter and all that
    >>> the Linux driver does is give commands to that processor.
    >>
    >>> Do you have any evidence to present that suggests that 3ware RAID
    >>> controllers have problems with any known mainboard?
    >>
    >> No. I was mostly thinking of SMART support, which is not there
    >> for SATA on Linux (unless you use the old IDE driver). Normal disk
    >> access works fine in my experience.

    > Actually, that would be a function of the 3ware drivers. With a 3ware host
    > adapter you do not use the SATA drivers, you use drivers specific to 3ware,
    > and the 3ware drivers _do_ support SMART under Linux.

    And does that work reliably and with the usual Linux tools,
    i.e. smartctl? It would kind of surprise me, since libata does
    not have SMART support at all at the moment, since the ATA
    passthru opcodes have only very recently been defined by the
    SCSI T10 committee.

    >>>> For your access scenario, SCSI will also be superior, since SCSI
    >>>> has supported command queuing for a long time.
    >>
    >>> I'm sorry, but it doesn't follow that because SCSI has supported command
    >>> queuing for a long time that the performance will be superior.
    >>
    >> Actually for small reads command queuing helps massively. The
    >> "has been available for a long time" just means that it will work.

    > So where is the evidence that SCSI command queuing works better for small
    > reads than does SATA command queuing?

    At the moment there is no SATA command queuing under Linux, as you
    can quickly discover by looking at the Serial ATA (SATA) Linux
    software status report page here:

    http://linux.yyz.us/sata/software-status.html

    I was not saying that SATA queuing is worse. I was saying (or intended to)
    that SCSI has command queuing under Linux while SATA does not currently.

    [...]
    >> I am suggesting that WD's strategy is suspicious.

    > Why? They see SATA as the coming thing. Are you suggesting that Western
    > Digital is incapable of producing a SCSI drive?

    I am suggesting that WD is trying to create a market between ATA
    and SCSI by claiming to be as good as SCSI at SATA prices. If
    it sounds too good to be true, it probably is.

    >> It may be up
    >> to SCSI standards, but I have doubts. SATA is far too new to compete
    >> with SCSI on reliability

    > Reliability in a disk is primarily a function of the mechanical components,
    > not the interface.

    It is a driver and software question with newer interfaces as well.
    I had numerous problems with SATA under Linux.

    [...]
    > Raptors have a jumper that selects startup in full power mode or startup in
    > standby, intended specifically to address this issue.

    Good. And do the 3ware controllers support staggered starts?

    >> while
    >> in SCSI that is usually a jumper-option. Typical is auto-start,
    >> auto-start with a selectable delay and no auto-start. On SATA
    >> you would have to do staggered power or the like to get the same
    >> effect.

    > Just tell the drive to come out of standby whenever you are ready.

    That should be something the controller and the drive do. If
    the OS does it, it can fail in numerous interesting ways.

    >>> Not saying that SCSI is not the superior solution but the reasons
    >>> given seem to be ignoring the fact that a "smart" SATA RAID
    >>> controller is being compared with a "dumb" SCSI setup.
    >>
    >> Not really. It is more a relatively new, supposedly smart technology
    >> against a proven, older, reliable, known to be smart technology.
    >> SCSI targets are really quite smart, while SATA targets are not too
    >> bright. The 3ware controllers may help some, but I doubt they
    >> can do that much.

    > You have made enough statements about SATA that are simply not true that I
    > wonder at the validity of your assessment.

    Of course you are free to do that. But I have 4TB of RAIDed storage
    under Linux, about half of which is SATA. And I did run into the problems
    I describe here.

    >> In addition the kernel knows how to talk to SCSI targets, while SATA is
    >> still in flux. Data transfer on SATA works, but everything else is
    >> still being worked on, like SMART support.

    > So let's see, you'd favor the use of a brand new LSI Logic SCSI RAID
    > controller over a brand new LSI Logic SATA RAID controller because "the
    > kernel knows how to talk to SCSI targets" despite the fact that both
    > devices use brand new drivers?

    You are talking about the LL drivers. There is an SCSI abstraction
    layer in the kernel as well as an SATA abstraction layer. The former
    is stable, proven and full-featured. The latter is pretty basic at
    the moment.

    To quote the maintainer:

    Basic Serial ATA support

    The "ATA host state machine", the core of the entire driver, is
    considered production-stable.

    The error handling is very simple, but at this stage that is an
    advantage. Error handling code anywhere is inevitably both complex and
    sorely under-tested. libata error handling is intentionally
    simple. Positives: Easy to review and verify correctness. Never data
    corruption. Negatives: if an error occurs, libata will simply send the
    error back to the block layer. There are limited retries by the block
    layer, depending on the type of error, but there is never a bus reset.

    > You're assuming that all contact with drives is via the SCSI or SATA kernel
    > drivers and not through a dedicated controller with drivers specific to
    > that controller.

    See above. Also if specific drivers are needed for specific
    hardware, they tend to be less reliable because the user-base is
    smaller.

    >> The RAID logic is pretty smart in both cases, since done by the
    >> kernel, but when having this many disks you _will_ want to
    >> poll defective lists/counts, drive temperature and the like
    >> periodically to get early warnings.

    > With the 3ware host adapter, the RAID logic is ON THE BOARD, _not_ in the
    > kernel.

    Not in the set-up of the OP. You did read that, didn't you?

    Seems to me we have a misunderstanding here. If the OP
    wanted to do Hardware-RAID the assessment would look
    different.

    Arno
  20. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    In article <114b3ubcrc6am5e@news.supernews.com>,
    "Rita Ä Berkowitz" <ritaberk2O04 @aol.com> wrote:

    > Arno Wagner wrote:
    >
    > > Sorry, but that is BS. Itanium is mostly dead technology and not
    > > really developed anymore. It is also massively over-priced. Xeons are
    > > sort of not-quite 64 bit CPUs, that have the main characteristic of
    > > being Intel and expensive.
    >
    > You need to catch up with the times. You are correct about the original
    > Itaniums being dogs, but I'm talking about the new Itanium2 processors,
    > which are also 64-bit. As for Intel being expensive, you get what you pay
    > for. The new Itanium2 systems are SWEEEEEEET!
    >
    > > I also know of no indications (except marketing BS by Intel) that
    > > Opterons are unreliable.
    >
    > It's being proven in the field daily. You simply don't see Opteron based
    > solutions being deployed by major commercial and governmental entities.
    > True, there are a few *novelty* systems that use many Opteron processors,
    > but they are more a curiosity than the mainstream norm. That said, if I
    > wanted a dirt-cheap gaming system I would opt for an Opteron based SATA box.

    April Fool's a week early?
  21. Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

    In comp.sys.ibm.pc.hardware.storage J. Clarke <jclarke.usenet@snet.net.invalid> wrote:
    > Arno Wagner wrote:

    >> In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
    >>> Arno Wagner wrote:
    >>>> In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
    >>
    >>>>
    >>>> One thing you can be relatively sure of is that the SCSI controller
    >>>> will work well with the mainboard. Also Linux has a long history of
    >>>> supporting SCSI, while SATA support is new and still being worked on.
    >>>>
    >>>> For your access scenario, SCSI will also be superior, since SCSI
    >>>> has supported command queuing for a long time.
    >>>>
    >>>> I also would not trust the Raptors as much as I would trust SCSI drives.
    >>>> The SCSI manufacturers know that SCSI customers expect high
    >>>> reliability, while the Raptor is more a poor man's race car.
    >>
    >>
    >>> My main concern is their novelty, rather than their performance. Call
    >>> it a hunch but it just doesn't feel right to risk it while there's a
    >>> proven solid SCSI solution for the same price.
    >>
    >>>>
    >>>> One more argument: You can put Config 2 on a 550W (redundant)
    >>>> PSU, while Config 1 will need something significantly larger,
    >>
    >>> Thanks for your comments. I forgot about the Power. Definitely worth
    >>> considering since we're getting 3 of these servers and UPS sizing
    >>> should also play in the cost equation.
    >>
    >> Power is critical to reliability. If you have a PSU with, say
    >> 50% normal and 70% peak load, that is massively more reliable than
    >> one with 70%/100%. Also many PSUs die on start-up, since e.g.
    >> disks draw their peak currents on spindle start.
    >>
    >>>> also because SATA does not support staggered start-up, while
    >>>> SCSI does. Is that already factored into the cost?
    >>
    >>> This I don't follow, what's staggered start-up ?
    >>
    >> You can jumper most (all?) SCSI drives to delay their spindle-start.
    >> Spindle start results in a massive amount of power drawn for some
    >> seconds. Maybe as much as 2-3 times the peaks you see during operation.
    >>
    >> SCSI drives can be jumpered to spin-up on power-on or on receiving
    >> a start-unit command. Some also support delays. You should be
    >> able to set the SCSI controller to issue the start-unit command
    >> to the drives with, say, 5 seconds delay between each unit or so.
    >> This massively reduces power drawn on start-up.
    >>
    >> SATA drives all (?) do spin-up on power-on. It is a problem
    >> when you have many disks. The PSU needs the reserves to deal
    >> with this worst case.

    > Would you do the world a favor and actually take ten minutes to research
    > your statements before you make them?

    I marked it with a "(?)" as tentative but not sure. Still, this is
    a newsgroup and you get what you pay for. I also don't think "the
    world" reads this group.

    > All SATA drives sold as "enterprise"
    > drives have the ability to perform staggered spinup.

    It is not that easy. Depending on the mechanism, you need controller-BIOS
    support or the right type of preconfiguration. Just "supports staggered
    start-up" does not cut it, especially on a new product type.

    Also, just to show the quality of your "research", I happen to have
    found an "enterprise" disk that does not support staggered spin-up in
    about 1 second: Maxtor MaxLine II plus. Staggered spin-up is only
    in MaxLine III. How do I know? Because I own one of these and read the
    documentation! I guess there will be more of them.

    In addition I did not find any specification of how the staggered spin-up
    works on a MaxLine III. Does it need controller support? Is it a SATA II
    only feature that does not work with an older controller? Can I jumper
    it? Will the controller support be there? With SCSI I know, because it
    has been a feature for decades.
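
    Whatever the mechanism, the power budget at stake is easy to estimate.
    A back-of-the-envelope sketch in Python; the spin-up and idle currents
    are assumed typical values for 10k drives, not figures from any
    datasheet:

        # 12 V rail load at power-on: all drives at once vs. staggered.
        # Current figures are assumed typical values, not drive specs.
        DRIVES = 24
        SPINUP_A = 2.5     # assumed peak 12 V current during spindle start
        IDLE_A = 0.6       # assumed 12 V current once spinning

        all_at_once = DRIVES * SPINUP_A * 12
        # Staggered worst case: the last drive starts while the rest idle.
        staggered = (SPINUP_A + (DRIVES - 1) * IDLE_A) * 12
        print(f"simultaneous: {all_at_once:.0f} W on the 12 V rail")
        print(f"staggered:    {staggered:.0f} W")

    On those assumptions the 24-drive configuration draws roughly 720 W on
    the 12 V rail at once versus about 200 W staggered, which is what the
    PSU-sizing argument earlier in the thread hinges on.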

    Arno
  22. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    In comp.sys.ibm.pc.hardware.storage "Rita Ä Berkowitz" <ritaberk2O04 @aol.com> wrote:
    > Arno Wagner wrote:

    >>> You need to catch up with the times. You are correct about the
    >>> original Itaniums being dogs, but I'm talking about the new Itanium2
    >>> processors, which are also 64-bit. As for Intel being expensive,
    >>> you get what you pay for. The new Itanium2 systems are SWEEEEEEET!
    >>
    >> You recommend a _new_ product for its reliability????
    >> I don't think I need to comment on that.

    > Oh please, come on now! This is like saying BMW introduces a new car this
    > year and it is going to be a failure because it uses cutting-edge
    > technology that hasn't a single shred of old technology behind it. When you
    > lift the hood you still see the same old internal combustion engine that
    > they have used for the last 50 years. The difference is they improved
    > manufacturing processes and materials to make the product better. They
    > didn't redesign the wheel for the sake of doing so.

    > Take a new Itanium2 box for a test drive and you'll open your eyes.

    Oh, I agree that it is powerful hardware. But you know, I would rather
    have a 10-machine cluster with 10 times the storage that can actually
    do the job than this single piece of gold-plated big iron.

    >>>> I also know of no indications (except marketing BS by Intel) that
    >>>> Opterons are unreliable.
    >>
    >>> It's being proven in the field daily. You simply don't see Opteron
    >>> based solutions being deployed by major commercial and governmental
    >>> entities.
    >>
    Which is a direct result of Intel's FUD and behind-the-scenes politics.
    >> In order to prove that something is unreliable it has to be used and
    >> fail. It being not used does not indicate unreliability. It just
    >> indicates "nobody gets fired for buying Intel".

    > Then again, if the box were being used in environments that were life
    > dependent, such as on the battlefield, reliability is paramount over cost.
    > Intel has a proven track record for reliability in the field. I would feel
    > safe using an Intel solution over an AMD any day of the week.

    So? From what I hear, getting people killed is preferred to
    spending lots of money on most battlefields. And if you think
    that CPU reliability is the most important question, then
    you cannot have much experience with software.

    >> So nothing is actually proven about reliability (or lack of)
    >> of Opterons in the field.

    > Market share has a great way of defining reliability.

    Well, that is complete nonsense. Market share does not define any
    technical characteristic. Market share could indicate some technical
    problem, but in this instance it does not. It rather signifies
    "we have always bought Intel".

    > It would seem that the major players don't feel comfortable betting
    > their livelihood on AMD.

    So? And what does that indicate exactly, besides that they just
    continue to do what they always did, like any large, conservative
    organisation? It does not say anything about the technological
    quality of Opterons.

    >>> True, there are a few *novelty* systems that use many Opteron
    >>> processors, but they are more a curiosity than the mainstream
    >>> norm. That said, if I wanted a dirt-cheap gaming system I would opt
    >>> for an Opteron based SATA box.
    >>
    That is certainly true. As always the question is to get the
    right balance for a specific application. If you have the money
    to buy the most expensive solution _and_ the clout to make the
    vendor not just rip you off, you certainly will get an adequate
    >> solution. But you will pay too much. Not all of us can afford
    >> to buy stuff the way the military does.

    > Define "pay too much"? Like most people, I would rather pay too much
    > upfront than be hit later with high maintenance and repair
    > costs, not to mention the disastrous outcome of total failure.

    If that were so, there would be hard numbers about this out there.
    Care to give a reference to a technological study that shows
    that AMD is less reliable than Intel to a degree that matters?

    > Like I said, you get what you pay for. If the military would go
    > totally AMD then I would agree with you. Till that day, AMD is not
    > a processor to be taken seriously.

    As somebody with now perhaps ~10 CPU years actual usage on AMD CPUs
    (mostly Athlons) under Linux I cannot agree. I have had troubles, but
    not a single problem because of the CPUs.

    Arno
  23. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    Arno Wagner wrote:

    >> Take a new Itanium2 box for a test drive and you'll open your eyes.
    >
    > Oh, I agree that it is powerful hardware. But you know, I would rather
    > have a 10-machine cluster with 10 times the storage that can actually
    > do the job than this single piece of gold-plated big iron.

    Of course you would, but the majority of commercial and military entities
    disagree with you.

    >> Then again, if the box were being used in environments that were life
    >> dependent, such as on the battlefield, reliability is paramount over
    >> cost. Intel has a proven track record for reliability in the field.
    >> I would feel safe using an Intel solution over an AMD any day of the
    >> week.
    >
    > So? From what I hear, getting people killed is preferred to
    > spending lots of money on most battlefields. And if you think
    > that CPU reliability is the most important question, then
    > you cannot have much experience with software.

    Sorry, software is of no concern to me since that is the other person's
    problem. But, then again, there are people who traditionally blame
    hardware-related problems on the software. The anti-Microsoft crowd
    comes to mind.

    >>> So nothing is actually proven about reliability (or lack of)
    >>> of Opterons in the field.
    >
    >> Market share has a great way of defining reliability.
    >
    > Well, that is complete nonsense. Market share does not define any
    > technical characteristic. Market share could indicate some technical
    > problem, but in this instance it does not. It rather signifies
    > "we have always bought Intel".

    Or more desirably, "we always sell Intel", because that is what our
    customers who have a clue want.

    >> It would seem that the major players don't feel comfortable betting
    >> their livelihood on AMD.
    >
    > So? And what does that indicate exactly, besides that they just
    > continue to do what they always did, like any large, conservative
    > organisation? It does not say anything about the technological
    > quality of Opterons.

    But it speaks volumes of the people purchasing the hardware. Not many want
    to have egg on their face when the passing fad called the Opteron takes a
    dump.

    >> Define "pay too much"? Most people and I would rather pay too much
    >> upfront instead of being backended with high maintenance and repair
    >> costs, not to mention the disastrous outcome of total failure.
    >
    > If that were so, there would be hard numbers about this out there.
    > Care to give a reference to a technological study that shows
    > that AMD is less reliable than Intel to a degree that matters?

    I only go by what the majority wants, and it surely isn't AMD. And most
    AMD zealots wouldn't look at the hard numbers even if they bit them in
    the ass.

    >> Like I said, you get what you pay for. If the military would go
    >> totally AMD then I would agree with you. Till that day, AMD is not
    >> a processor to be taken seriously.
    >
    > As somebody with now perhaps ~10 CPU years actual usage on AMD CPUs
    > (mostly Athlons) under Linux I cannot agree. I have had troubles, but
    > not a single problem because of the CPUs.

    I guess it boils down to your expectations of what you want from any
    particular CPU. Like I said, if it's gaming and a simple home-based MP3
    server for the kiddies then I'll say that AMD is the only choice from a
    sheer economics standpoint.


    Rita
  24. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    In comp.sys.ibm.pc.hardware.storage flux <support@fluxsoft.com> wrote:
    > In article <114b3ubcrc6am5e@news.supernews.com>,
    > "Rita Ä Berkowitz" <ritaberk2O04 @aol.com> wrote:

    >> Arno Wagner wrote:
    >>
    >> > Sorry, but that is BS. Itanium is mostly dead technology and not
    >> > really developed anymore. It is also massively over-priced. Xeons are
    >> > sort of not-quite 64 bit CPUs, that have the main characteristic of
    >> > being Intel and expensive.
    >>
    >> You need to catch up with the times. You are correct about the original
    >> Itaniums being dogs, but I'm talking about the new Itanium2 processors,
    >> which are also 64-bit. As for Intel being expensive, you get what you pay
    >> for. The new Itanium2 systems are SWEEEEEEET!
    >>
    >> > I also know of no indications (except marketing BS by Intel) that
    >> > Opterons are unreliable.
    >>
    >> It's being proven in the field daily. You simply don't see Opteron based
    >> solutions being deployed by major commercial and governmental entities.
    >> True, there are a few *novelty* systems that use many Opteron processors,
    >> but they are more a curiosity than the mainstream norm. That said, if I
    >> wanted a dirt-cheap gaming system I would opt for an Opteron based SATA box.

    > April Fool's a week early?

    Probably suppressed machine rage. I know I have some. But then what
    do I know, I use AMD CPUs and cheap drives. Probably deserve all
    the problems I have ;-)

    Arno
  25. Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

    Arno Wagner wrote:

    > In comp.sys.ibm.pc.hardware.storage J. Clarke
    > <jclarke.usenet@snet.net.invalid> wrote:
    >> Arno Wagner wrote:
    >
    >>> In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
    >>>> Arno Wagner wrote:
    >>>>> In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
    >>>
    >>>>>
    >>>>> One thing you can be relatively sure of is that the SCSI controller
    >>>>> will work well with the mainboard. Also Linux has a long history of
    >>>>> supporting SCSI, while SATA support is new and still being worked on.
    >>>>>
    >>>>> For your access scenario, SCSI will also be superior, since SCSI
    >>>>> has supported command queuing for a long time.
    >>>>>
    >>>>> I also would not trust the Raptors as much as I would trust SCSI drives.
    >>>>> The SCSI manufacturers know that SCSI customers expect high
    >>>>> reliability, while the Raptor is more a poor man's race car.
    >>>
    >>>
    >>>> My main concern is their novelty, rather than their performance. Call
    >>>> it a hunch but it just doesn't feel right to risk it while there's a
    >>>> proven solid SCSI solution for the same price.
    >>>
    >>>>>
    >>>>> One more argument: You can put Config 2 on a 550W (redundant)
    >>>>> PSU, while Config 1 will need something significantly larger,
    >>>
    >>>> Thanks for your comments. I forgot about the Power. Definitely worth
    >>>> considering since we're getting 3 of these servers and UPS sizing
    >>>> should also play in the cost equation.
    >>>
    >>> Power is critical to reliability. If you have a PSU with, say
    >>> 50% normal and 70% peak load, that is massively more reliable than
    >>> one with 70%/100%. Also many PSUs die on start-up, since e.g.
    >>> disks draw their peak currents on spindle start.
    >>>
    >>>>> also because SATA does not support staggered start-up, while
    >>>>> SCSI does. Is that already factored into the cost?
    >>>
    >>>> This I don't follow, what's staggered start-up ?
    >>>
    >>> You can jumper most (all?) SCSI drives to delay their spindle-start.
    >>> Spindle start results in a massive amount of power drawn for some
    >>> seconds. Maybe as much as 2-3 times the peaks you see during operation.
    >>>
    >>> SCSI drives can be jumpered to spin-up on power-on or on receiving
    >>> a start-unit command. Some also support delays. You should be
    >>> able to set the SCSI controller to issue the start-unit command
    >>> to the drives with, say, 5 seconds delay between each unit or so.
    >>> This massively reduces power drawn on start-up.
    >>>
    >>> SATA drives all (?) do spin-up on power-on. It is a problem
    >>> when you have many disks. The PSU needs the reserves to deal
    >>> with this worst case.
    >
    >> Would you do the world a favor and actually take ten minutes to research
    >> your statements before you make them?
    >
    > I marked it with a "(?)" as tentative but not sure. Still, this is
    > a newsgroup and you get what you pay for. I also don't think "the
    > world" reads this group.
    >
    >> All SATA drives sold as "enterprise"
    >> drives have the ability to perform staggered spinup.
    >
    > It is not that easy. Depending on the mechanism, you need controller-BIOS
    > support or the right type of preconfiguration.

    The same is true of SCSI. So what?

    > Just "supports staggered
    > start-up" does not cut it, especially on a new product type.
    >
    > Also, just to show the quality of your "research", I happen to have
    > found an "enterprise" disk that does not support staggered spin-up in
    > about 1 second: Maxtor MaxLine II plus. Staggered spin-up is only
    > in MaxLine III. How do I know? Because I own one of these and read the
    > documentation! I guess there will be more of them.

    Take a look at the back of the drive. You should see a jumper block next to
    the SATA connectors. Putting a jumper on the right pins in that block
    enables staggered spin up. It appears that there is documentation that you
    did _not_ read.

    > In addition I did not find any specification of how the staggered spin-up
    > works on a MaxLine III. Does it need controller support? Is it a SATA II
    > only feature that does not work with an older controller? Can I jumper
    > it? Will the controller support be there? With SCSI I know, because it
    > has been a feature for decades.

    Look harder. It's on the Maxtor site. It took me about 30 seconds to find
    it, most of which was waiting for pages to load.

    > Arno

    --
    --John
    to email, dial "usenet" and validate
    (was jclarke at eye bee em dot net)
  26. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    > As somebody with now perhaps ~10 CPU years actual usage on AMD CPUs
    > (mostly Athlons) under Linux I cannot agree. I have had troubles, but
    > not a single problem because of the CPUs.

    Usually, when people are speaking about "Athlons are worse", this is due to
    the worse quality of _chipsets and mobos_, and not AMD's CPUs themselves.

    VIA chipsets were traditionally worse than Intel ones - for instance, in terms
    of lame ACPI support.

    --
    Maxim Shatskih, Windows DDK MVP
    StorageCraft Corporation
    maxim@storagecraft.com
    http://www.storagecraft.com
  27. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    In comp.sys.ibm.pc.hardware.storage Maxim S. Shatskih <maxim@storagecraft.com> wrote:
    >> As somebody with now perhaps ~10 CPU years actual usage on AMD CPUs
    >> (mostly Athlons) under Linux I cannot agree. I have had troubles, but
    >> not a single problem because of the CPUs.

    > Usually, when people are speaking about "Athlons are worse", this is due to
    > the worse quality of _chipsets and mobos_, and not AMD's CPUs themselves.

    In the beginning that was certainly true, especially as AMD chipsets
    did not get as much R&D as the Intel ones because of low market share.
    I think that is past now.

    > VIA chipsets were traditionally worse than Intel ones - for
    > instance, in terms of lame ACPI support.

    Agreed. I had those problems. In fact I believe it became
    usable only recently. However it is not really needed on
    a server.

    Arno
  28. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    On Sat, 26 Mar 2005 06:48:01 -0500, "Rita Ä Berkowitz" <ritaberk2O04
    @aol.com> wrote:

    >lmanna@gmail.com wrote:
    >> Hello all,
    >>
    >> Which of the two following architectures would you choose for a
    >> high-perf NFS server in a cluster env. Most of our data ( 80% ) is
    >> small ( < 64 kb ) files. Reads and Writes are similar and mostly
    >> random in nature:
    >
    >I wouldn't use either one of them since your major flaw would be using an
    >Opteron when you should only be using Xeon or Itanium2 processors. Now, if
    >you are just putting an MP3 server in the basement of your home for
    >light-duty work you can squeak by with the Opterons. As for the drives, I
    >would only use SCSI in the system you mention.

    Rita,

    You've got to be the most predictable poster on usenet. Many of us
    would choke if you ever made different points, or sold Intel & scsi
    based machines without trolling.
  29. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    Curious George wrote:

    > You've got to be the most predictable poster on usenet. Many of us
    > would choke if you ever made different points, or sold Intel & scsi
    > based machines without trolling.

    LOL! Is there really anything else worth using besides an Intel based SCSI
    system? Point made!


    Rita
  30. Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

    It turns out Arnie knows little even when the topic is Linux.

    There is a Linux SATA FAQ: http://www.linuxmafia.com/faq/Hardware/sata.html

    There is an interesting claim regarding RAID - none of the SATA1 chips can
    support hotplug. Is this true?

    I see there is an AHCI SATA2 driver, dunno what state it is in. This should
    support the nForce4 as well as ICH6/7. With port multipliers, this would be
    the best way to do software RAID (no extra cost), with bandwidth of 6-12Gb/s
    on four SATA2 ports.
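
    That 6-12Gb/s range is simply four ports times the per-port signalling
    rate; a trivial check in Python:

        # Aggregate signalling bandwidth for four ports at each SATA rate.
        PORTS = 4
        for name, gbps in [("SATA I", 1.5), ("SATA II", 3.0)]:
            print(f"{name}: {PORTS} x {gbps} Gb/s = {PORTS * gbps:.0f} Gb/s")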

    Note that Longhorn also has AHCI drivers, which may be backported to Win 2K3.

    For a review of SATA RAID cards, see http://www.tweakers.net/reviews/557/1 .

    "J. Clarke" <jclarke.usenet@snet.net.invalid> wrote in message
    news:d257ne01cmi@news2.newsguy.com...
    > Arno Wagner wrote:
    >
  31. Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

    Previously J. Clarke <jclarke.usenet@snet.net.invalid> wrote:
    > Arno Wagner wrote:

    >> In comp.sys.ibm.pc.hardware.storage J. Clarke
    >> <jclarke.usenet@snet.net.invalid> wrote:
    [...]
    >> Just "supports staggered
    >> start-up" does not cut it, especially on a new product type.
    >>
    >> Also, just to show the quality of your "research", I happen to have
    >> found an "enterprise" disk that does not support staggered spin-up in
    >> about 1 second: Maxtor MaxLine II plus. Staggered spin-up is only
    >> in MaxLine III. How do I know? Because I own one of these and read the
    >> documentation! I guess there will be more of them.

    > Take a look at the back of the drive. You should see a jumper block next to
    > the SATA connectors. Putting a jumper on the right pins in that block
    > enables staggered spin up. It appears that there is documentation that you
    > did _not_ read.

    Well, according to the product manual the jumpers on the SATA version
    have no functionality. I am not about to set unmarked jumpers. Also
    these drives, while sold as "enterprise drives" are not SATA II drives.
    Maybe that is what confuses you. Staggered spin-up is optional in SATA I
    drives.

    Anyways, I think there is no point in continuing this.

    Arno
  32. Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

    Arno Wagner wrote:

    > Previously J. Clarke <jclarke.usenet@snet.net.invalid> wrote:
    >> Arno Wagner wrote:
    >
    >>> In comp.sys.ibm.pc.hardware.storage J. Clarke
    >>> <jclarke.usenet@snet.net.invalid> wrote:
    > [...]
    >>> Just "supports staggered
    >>> start-up" does not cut it, especially on a new product type.
    >>>
    >>> Also, just to show the quality of your "research", I happen to have
    >>> found an "enterprise" disk that does not support staggered spin-up in
    >>> about 1 second: Maxtor MaxLine II plus. Staggered spin-up is only
    >>> in MaxLine III. How do I know? Because I own one of these and read the
    >>> documentation! I guess there will be more of them.
    >
    >> Take a look at the back of the drive. You should see a jumper block next
    >> to
    >> the SATA connectors. Putting a jumper on the right pins in that block
    >> enables staggered spin up. It appears that there is documentation that
    >> you did _not_ read.
    >
    > Well, according to the product manual the jumpers on the SATA version
    > have no functionality. I am not about to set unmarked jumpers. Also
    > these drives, while sold as "enterprise drives" are not SATA II drives.
    > Maybe that is what confuses you. Staggered spin-up is optional in SATA I
    > drives.

    What confuses me is that you are saying something different from what Maxtor
    is saying. Go to the Maxtor site and plug "staggered" into their search
    box. I had hoped that I had given you enough of a hint for you to figure
    out on your own that maybe you should do this before digging yourself in
    deeper.

    > Anyways, I think there is no point in continuing this.

    Then don't.

    > Arno

    --
    --John
    to email, dial "usenet" and validate
    (was jclarke at eye bee em dot net)
  33. Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

    Previously J. Clarke <jclarke.usenet@snet.net.invalid> wrote:
    > Arno Wagner wrote:

    >> Previously J. Clarke <jclarke.usenet@snet.net.invalid> wrote:
    >>> Arno Wagner wrote:
    >>
    >>>> In comp.sys.ibm.pc.hardware.storage J. Clarke
    >>>> <jclarke.usenet@snet.net.invalid> wrote:
    >> [...]
    >>>> Just "supports staggered
    >>>> start-up" does not cut it, especially on a new product type.
    >>>>
    >>>> Also, just to show the quality of your "research", I happen to have
    >>>> found an "enterprise" disk that does not support staggered spin-up in
    >>>> about 1 second: Maxtor MaxLine II plus. Staggered spin-up is only
    >>>> in MaxLine III. How do I know? Because I own one of these and read the
    >>>> documentation! I guess there will be more of them.
    >>
    >>> Take a look at the back of the drive. You should see a jumper block next
    >>> to
    >>> the SATA connectors. Putting a jumper on the right pins in that block
    >>> enables staggered spin up. It appears that there is documentation that
    >>> you did _not_ read.
    >>
    >> Well, according to the product manual the jumpers on the SATA version
    >> have no functionality. I am not about to set unmarked jumpers. Also
    >> these drives, while sold as "enterprise drives" are not SATA II drives.
    >> Maybe that is what confuses you. Staggered spin-up is optional in SATA I
    >> drives.

    > What confuses me is that you are saying something different from what Maxtor
    > is saying.

    O.K., I think you are talking about the white-paper on staggered spin-up.

    The problem is that (1) the white paper is the only place where this
    behaviour is mentioned, and (2) the first column of Table 3 is
    inconsistent: the pins are either for storage of unused jumpers or
    for delayed spin-up. Also there are 8 possibilities for "installing
    a jumper" on these seven pins. Figure 2 in the same white-paper gives
    an entirely incorrect drawing of the connector on the drives. This
    drawing may be correct for the MaxLine III or DiamondMax 10, but it is
    not for the MaxLine II and DiamondMax 9.

    Sorry, but I think this white paper just gives wrong information.

    Arno
  34. Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

    "J. Clarke" <jclarke.usenet@snet.net.invalid> wrote in message news:d246am0tgl@news1.newsguy.com
    > Arno Wagner wrote:
    >
    > > In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
    > > > Arno Wagner wrote:
    > > > > In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
    > >
    > > > >
    > > > > One thing you can be relatively sure of is that the SCSI controller
    > > > > will work well with the mainboard. Also Linux has a long history of
    > > > > supporting SCSI, while SATA support is new and still being worked on.
    > > > >
    > > > > For your access scenario, SCSI will also be superior, since SCSI
    > > > > has supported command queuing for a long time.
    > > > >
    > > > > I also would not trust the Raptors as much as I would trust SCSI drives.
    > > > > The SCSI manufacturers know that SCSI customers expect high
    > > > > reliability, while the Raptor is more a poor man's race car.
    > >
    > >
    > > > My main concern is their novelty, rather than their performance. Call
    > > > it a hunch but it just doesn't feel right to risk it while there's a
    > > > proven solid SCSI solution for the same price.
    > >
    > > > >
    > > > > One more argument: You can put Config 2 on a 550W (redundant)
    > > > > PSU, while Config 1 will need something significantly larger,
    > >
    > > > Thanks for your comments. I forgot about the Power. Definitely worth
    > > > considering since we're getting 3 of these servers and UPS sizing
    > > > should also play in the cost equation.
    > >
    > > Power is critical to reliability. If you have a PSU with, say
    > > 50% normal and 70% peak load, that is massively more reliable than
    > > one with 70%/100%. Also many PSUs die on start-up, since e.g.
    > > disks draw their peak currents on spindle start.
    > >
    > > > > also because SATA does not support staggered start-up, while
    > > > > SCSI does. Is that already factored into the cost?
    > >
    > > > This I don't follow, what's staggered start-up ?
    > >
    > > You can jumper most (all?) SCSI drives to delay their spindle-start.
    > > Spindle start results in a massive amount of power drawn for some
    > > seconds. Maybe as much as 2-3 times the peaks you see during operation.
    > >
    > > SCSI drives can be jumpered to spin-up on power-on or on receiving
    > > a start-unit command. Some also support delays. You should be
    > > able to set the SCSI controller to issue the start-unit command
    > > to the drives with, say, 5 seconds delay between each unit or so.
    > > This massively reduces power drawn on start-up.
    > >

    > > SATA drives all (?)

    Not Hitachis, not Seagates, not Maxtors that are SATA II compatible.

    > > do spin-up on power-on. It is a problem when you have many disks.
    > > The PSU needs the reserves to deal with this worst case.

    The PSU needs to deal with that anyway when all drives are seeking randomly.
    Seagates do appear to spin up aggressively, though.

    >
    > Would you do the world a favor

    Babblemouth? Never!

    > and actually take ten minutes to research
    > your statements before you make them?

    He'll die first.

    > All SATA drives sold as "enterprise"
    > drives have the ability to perform staggered spinup.

    So do all Hitachis, Seagate Barracudas 7 and 8, and Maxtor DiamondMax Plus 9 and 10.

    >
    > > Arno
  35. Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

    Arno Wagner wrote:

    > Previously J. Clarke <jclarke.usenet@snet.net.invalid> wrote:
    >> Arno Wagner wrote:
    >
    >>> Previously J. Clarke <jclarke.usenet@snet.net.invalid> wrote:
    >>>> Arno Wagner wrote:
    >>>
    >>>>> In comp.sys.ibm.pc.hardware.storage J. Clarke
    >>>>> <jclarke.usenet@snet.net.invalid> wrote:
    >>> [...]
    >>>>> Just "supports staggered
    >>>>> start-up" does not cut it, especially on a new product type.
    >>>>>
    >>>>> Also, just to show the quality of your "research", I happen to have
    >>>>> found an "enterprise" disk that does not support staggered spin-up in
    >>>>> about 1 second: Maxtor MaxLine II plus. Staggered spin-up is only
    >>>>> in MaxLine III. How do I know? Because I own one of these and read the
    >>>>> documentation! I guess there will be more of them.
    >>>
    >>>> Take a look at the back of the drive. You should see a jumper block
    >>>> next to
    >>>> the SATA connectors. Putting a jumper on the right pins in that block
    >>>> enables staggered spin up. It appears that there is documentation that
    >>>> you did _not_ read.
    >>>
    >>> Well, according to the product manual the jumpers on the SATA version
    >>> have no functionality. I am not about to set unmarked jumpers. Also
    >>> these drives, while sold as "enterprise drives" are not SATA II drives.
    >>> Maybe that is what confuses you. Staggered spin-up is optional in SATA I
    >>> drives.
    >
    >> What confuses me is that you are saying something different from what
    >> Maxtor is saying.
    >
    > O.K., I think you are talking about the white-paper on staggered spin-up.
    >
    > The problem is that (1) the white paper is the only place where this
    > behaviour is mentioned, and (2) the first column of Table 3 is
    > inconsistent: the pins are either for storage of unused jumpers or
    > for delayed spin-up.

    Perhaps it has not occurred to you that among the 9, not 7, jumper pins
    there is one position that is "dead" and provided for jumper storage, a
    not uncommon configuration, and another that enables staggered spin-up.
    Why would they
    provide a rack for unused jumpers if the drive does not have any use for
    jumpers?

    > Also there are 8 possibilities for "installing
    > a jumper" on these seven pins.

    Actually, I believe there are 12.

    > Figure 2 in the same white-paper gives
    > an entirely incorrect drawing of the connector on the drives. This
    > drawing may be correct for the MaxLine III or DiamondMax 10, but it is
    > not for the MaxLine II and DiamondMax 9.
    >
    > Sorry, but I think this white paper just gives wrong information.

    When you know for sure, get back to us. You might also want to tell Maxtor
    about it.

    But at this point even if you are correct about that specific drive you're
    arguing minutiae.

    Personally, if I had a MaxLine II Plus that I wasn't using I'd try the damned
    jumpers and see if any position gave the desired effect.

    > Arno

    --
    --John
    to email, dial "usenet" and validate
    (was jclarke at eye bee em dot net)
  36. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    On Sun, 27 Mar 2005 06:48:05 -0500, "Rita Ä Berkowitz" <ritaberk2O04
    @aol.com> wrote:

    >Curious George wrote:
    >
    >> You've got to be the most predictable poster on usenet. Many of us
    >> would choke if you ever made different points, or sold Intel & scsi
    >> based machines without trolling.
    >
    >LOL! Is there really anything else worth using besides an Intel based SCSI
    >system? Point made!


    For x86 "Servers" & "Workstations" I'm also a Supermicro slut and a
    Seagate SCSI bigot. These are safe bets for a stable, reliable
    platform; their quality & consistency make integration easy & yield
    good value. Believe it or not, though, there are other worthwhile
    things & an inflexible one-size-fits-all approach is inherently
    flawed. But that's not my point. Your answer proved it nonetheless.
  37. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    Curious George wrote:

    > For x86 "Servers" & "Workstations" I'm also a Supermicro slut and a
    > Seagate SCSI bigot. These are safe bets for a stable, reliable
    > platform; their quality & consistency make integration easy & yield
    > good value. Believe it or not, though, there are other worthwhile
    > things & an inflexible one-size-fits-all approach is inherently
    > flawed. But that's not my point. Your answer proved it nonetheless.

    I see you learned my tastes? Yes, I realize there are other "worthwhile"
    solutions out there. That's not the issue since I put all options on the
    table for the customer. If I don't have something to fit their needs I
    refer them to people who do. I see no logic in pissing around with
    hardware that has no benefit for my customers or myself.


    Rita
  38. Archived from groups: comp.sys.ibm.pc.hardware.storage,comp.arch.storage (More info?)

    Rita Ä Berkowitz wrote:

    >Arno Wagner wrote:
    >
    >> Sorry, but that is BS. Itanium is mostly dead technology and not
    >> really developed anymore. It is also massively over-priced. Xeons are
    >> sort of not-quite 64 bit CPUs, that have the main characteristic of
    >> being Intel and expensive.
    >
    >You need to catch up with the times. You are correct about the original
    >Itaniums being dogs, but I'm talking about the new Itanium2 processors,
    >which are also 64-bit. As for Intel being expensive, you get what you pay
    >for. The new Itanium2 systems are SWEEEEEEET!

    Ignore the troll, too stupid to understand what Arno meant by "dead
    technology".
  39. Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

    Previously J. Clarke <jclarke.usenet@snet.net.invalid> wrote:
    > lmanna@gmail.com wrote:

    >>
    >> J. Clarke wrote:
    >>> Arno Wagner wrote:
    [...]

    >>> If he's proposing to soft-RAID using a 3ware host adapter then
    >>> he's a damned fool.
    >> Well my colleague is ;) His argument is that a dual Opteron fully
    >> dedicated to file serving is more high-perf than a hardware RAID,
    >> even though we spend $1400 on the 3ware cards ( 2x12 ports SATA ).
    > He's fighting the system that way, not working with it. If hardware RAID
    > was so lousy then IBM wouldn't be putting it in their midrange and high end
    > servers.

    >> I think we can achieve the same perf and more reliability with just
    >> 12 SCSI drives using the onboard SCSI with the Linux SCSI
    >> abstraction layer and software RAID. What do you guys think ?

    > I think that if he's determined to do soft RAID the outcome is going to be a
    > good deal more satisfactory if he's not fighting a device that wasn't
    > designed to support that mode of operation.

    I fully agree to that.
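
    For reference, the proposed 12-disk software RAID 10 is a single mdadm
    invocation under Linux md. A minimal sketch in Python that only
    assembles and prints the command; the device names (/dev/sdb through
    /dev/sdm) and the 64 KiB chunk size are assumptions for illustration:

        # Build (and print for review) the mdadm command for a 12-disk
        # Linux md RAID 10. Device names and chunk size are assumed.
        disks = [f"/dev/sd{c}" for c in "bcdefghijklm"]   # 12 data drives
        cmd = ["mdadm", "--create", "/dev/md0",
               "--level=10", "--raid-devices=12", "--chunk=64"] + disks
        print(" ".join(cmd))
        # import subprocess; subprocess.run(cmd, check=True)  # to run it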

    [...]
    > If it's a given that software RAID will be used then I'd go for the SCSI
    > solution simply because there's not a "dumb" SATA RAID controller with
    > enough ports to do what you want to do.

    That could be problematic, yes. I think the largest you get is
    8 ports (Promise), but that requires PCI-X. If you use 4-port PCI
    controllers you would have to use 6 for 24 disks. I would not
    trust such a configuration, if it is possible at all.

    Arno
  40. Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

    Arno Wagner wrote:

    > Previously J. Clarke <jclarke.usenet@snet.net.invalid> wrote:
    >> lmanna@gmail.com wrote:
    >
    >>>
    >>> J. Clarke wrote:
    >>>> Arno Wagner wrote:
    > [...]
    >
    >>>> If he's proposing to soft-RAID using a 3ware host adapter then
    >>>> he's a damned fool.
    >>> Well my colleague is ;) His argument is that a dual Opteron fully
    >>> dedicated to file serving is more high-perf than a hardware RAID,
    >>> even though we spend $1400 on the 3ware cards ( 2x12 ports SATA ).
    >> He's fighting the system that way, not working with it. If hardware RAID
    >> was so lousy then IBM wouldn't be putting it in their midrange and high end
    >> servers.
    >
    >>> I think we can achieve the same perf and more reliability with just
    >>> 12 SCSI drives using the onboard SCSI with the Linux SCSI
    >>> abstraction layer and software RAID. What do you guys think ?
    >
    >> I think that if he's determined to do soft RAID the outcome is going to
    >> be a good deal more satisfactory if he's not fighting a device that
    >> wasn't designed to support that mode of operation.
    >
    > I fully agree to that.
    >
    > [...]
    >> If it's a given that software RAID will be used then I'd go for the SCSI
    >> solution simply because there's not a "dumb" SATA RAID controller with
    >> enough ports to do what you want to do.
    >
    > That could be problematic, yes. I think the largest you get is
    > 8 ports (Promise), but that requires PCI-X.

    Interesting. I had not seen that one. But is it truly dumb? If we're
    thinking of the same model it belongs to the SX series which normally has
    at least some onboard intelligence.

    As for PCI-X, that's actually another good point. With 12 drives there may
    be times that the PCI bus becomes a bottleneck.

    > controllers you would have to use 6 for 24 disks. I would not
    > trust such a configuration, if it is possible at all.
    >
    > Arno

    --
    --John
    to email, dial "usenet" and validate
    (was jclarke at eye bee em dot net)
  41. Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

    Previously J. Clarke <jclarke.usenet@snet.net.invalid> wrote:
    > Arno Wagner wrote:

    >> Previously J. Clarke <jclarke.usenet@snet.net.invalid> wrote:
    >>> lmanna@gmail.com wrote:
    >>
    >>>>
    >>>> J. Clarke wrote:
    >>>>> Arno Wagner wrote:
    >> [...]
    >>
    >>>>> If he's proposing to soft-RAID using a 3ware host adapter then
    >>>>> he's a damned fool.
    >>>> Well my colleague is ;) His argument is that a dual Opteron fully
    >>>> dedicated to file serving is more high-perf than a hardware RAID,
    >>>> even though we spend $1400 on the 3ware cards ( 2x12 ports SATA ).
    >>> He's fighting the system that way, not working with it. If hardware RAID
    >>> was so lousy then IBM wouldn't be putting it in their midrange and high end
    >>> servers.
    >>
    >>>> I think we can achieve the same perf and more reliability with just
    >>>> 12 SCSI drives using the onboard SCSI with the Linux SCSI
    >>>> abstraction layer and software RAID. What do you guys think ?
    >>
    >>> I think that if he's determined to do soft RAID the outcome is going to
    >>> be a good deal more satisfactory if he's not fighting a device that
    >>> wasn't designed to support that mode of operation.
    >>
    >> I fully agree to that.
    >>
    >> [...]
    >>> If it's a given that software RAID will be used then I'd go for the SCSI
    >>> solution simply because there's not a "dumb" SATA RAID controller with
    >>> enough ports to do what you want to do.
    >>
    >> That could be problematic, yes. I think the largest you get is
    >> 8 ports (Promise), but that requires PCI-X.

    > Interesting. I had not seen that one. But is it truly dumb? If we're
    > thinking of the same model it belongs to the SX series which normally has
    > at least some onboard intelligence.

    Yes, I am thinking of the SX8. It is one of these RAID-accelerators
    where there is an XOR engine and the like in the card, but it cannot
    do the full job without assistance from the CPU. It is somewhere halfway
    between a dumb controller and true hardware RAID, also in price.

    > As for PCI-X, that's actually another good point. With 12 drives there may
    > be times that the PCI bus becomes a bottleneck.

    Definitely.

    Arno
  42. Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

    "Arno Wagner" <me@privacy.net> wrote in message news:3au4ctF6bvqfbU1@individual.net
    > Previously J. Clarke <jclarke.usenet@snet.net.invalid> wrote:
    > > Arno Wagner wrote:
    >
    > > > Previously J. Clarke <jclarke.usenet@snet.net.invalid> wrote:
    > > > > lmanna@gmail.com wrote:
    > > >
    > > > > >
    > > > > > J. Clarke wrote:
    > > > > > > Arno Wagner wrote:
    > > > [...]
    > > >
    > > > > > > If he's proposing to soft-RAID using a 3ware host adapter then
    > > > > > > he's a damned fool.
    > > > > > Well my colleague is ;) His argument is that a dual Opteron fully
    > > > > > dedicated to file serving is more high-perf than a hardware RAID,
    > > > > > even though we spend $1400 on the 3ware cards ( 2x12 ports SATA ).
    > > > > He's fighting the system that way, not working with it. If hardware RAID
    > > > > was so lousy then IBM wouldn't be putting it in their midrange and high end
    > > > > servers.
    > > >
    > > > > > I think we can achieve the same perf and more reliability with just
    > > > > > 12 SCSI drives using the onboard SCSI with the Linux SCSI
    > > > > > abstraction layer and software RAID. What do you guys think ?
    > > >
    > > > > I think that if he's determined to do soft RAID the outcome is going to
    > > > > be a good deal more satisfactory if he's not fighting a device that
    > > > > wasn't designed to support that mode of operation.
    > > >
    > > > I fully agree to that.
    > > >
    > > > [...]
    > > > > If it's a given that software RAID will be used then I'd go for the SCSI
    > > > > solution simply because there's not a "dumb" SATA RAID controller with
    > > > > enough ports to do what you want to do.
    > > >
    > > > That could be problematic, yes. I think the largest you get is
    > > > 8 ports (Promise), but that requires PCI-X.
    >
    > > Interesting. I had not seen that one. But is it truly dumb? If we're
    > > thinking of the same model it belongs to the SX series which normally has
    > > at least some onboard intelligence.
    >
    > Yes, I am thinking of the SX8. It is one of these RAID-accelerators
    > where there is an XOR engine and the like in the card, but it cannot
    > do the full job without assistance from the CPU. It is somewhere halfway
    > between a dumb controller and true hardware RAID, also in price.
    >
    > > As for PCI-X, that's actually another good point.

    > With 12 drives there may be times that the PCI bus becomes a bottleneck.

    If it's not the SCSI bus first.
    And he will be using server boards so PCI is not a problem.

    >
    > Definitely.

    With 64 kB files read randomly, spread over 6 strips or even 12?
    Yeah, right.

    Maybe the queue mechanism can combine a few into sequential reads, but
    even then you are still looking at very small IO with huge latency
    overhead (a 64 kB transfer takes about 1 ms vs. a 6 ms access time,
    and that's for a single drive). So you are using 1/7th of the
    available bandwidth of the resultant drive.

    I think even standard PCI will cope.
    And 120 MB/s is still stinking fast for the odd big file.
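
    Working that arithmetic through explicitly (all figures are the
    assumptions above: 64 kB requests, about 1 ms of transfer, 6 ms of
    access time), a quick Python check:

        # Worked version of the arithmetic above; all figures are the
        # post's assumptions, not measurements.
        IO_KB = 64
        XFER_MS = 1.0            # time to move 64 kB off the platter
        ACCESS_MS = 6.0          # average seek plus rotational latency

        per_io_ms = XFER_MS + ACCESS_MS
        per_drive = IO_KB / per_io_ms        # kB per ms equals MB/s
        drives = 12
        print(f"utilization: {XFER_MS / per_io_ms:.0%} (about 1/7)")
        print(f"per drive:   {per_drive:.1f} MB/s of random 64 kB reads")
        print(f"{drives} drives:   {drives * per_drive:.0f} MB/s "
              f"vs ~133 MB/s for 32-bit/33 MHz PCI")

    On those numbers 12 drives doing purely random 64 kB reads deliver
    around 110 MB/s in aggregate, just under what plain 32-bit/33 MHz PCI
    can carry.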

    >
    > Arno
  43. Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

    "J. Clarke" <jclarke.usenet@snet.net.invalid> wrote in message news:d2ab8n0pk5@news2.newsguy.com
    > lmanna@gmail.com wrote:
    > > J. Clarke wrote:
    > > > Arno Wagner wrote:
    > > > > Previously J. Clarke <jclarke.usenet@snet.net.invalid> wrote:
    > > > > > Arno Wagner wrote:
    > > > > > > In comp.sys.ibm.pc.hardware.storage J. Clarke <jclarke.usenet@snet.net.invalid> wrote:
    > > > > > > > Arno Wagner wrote:
    > > > > > >
    > > > > > > > > In comp.sys.ibm.pc.hardware.storage lmanna@gmail.com wrote:
    > > > > > > > > > Hello all,
    > > > > > > [...]

    [onehelluvabigsnip]

    > > > If he's proposing to soft-RAID using a 3ware host adapter then he's a
    > > > damned fool.
    > >
    > >
    > > Well my colleague is ;) His argument is that a dual Opteron fully
    > > dedicated to file serving is more high-perf than a hardware RAID,
    > > even though we spend $1400 on the 3ware cards ( 2x12 ports SATA ).
    >
    > He's fighting the system that way, not working with it. If hardware RAID
    > was so lousy then IBM wouldn't be putting it in their midrange and high end
    > servers.
    >
    > > I think we can achieve the same perf and more reliability with just 12
    > > SCSI drives using the onboard SCSI with the Linux SCSI abstraction
    > > layer and software RAID. What do you guys think ?
    >
    > I think that if he's determined to do soft RAID the outcome is going to
    > be a good deal more satisfactory if he's not fighting a device that wasn't
    > designed to support that mode of operation.
    >
    > But he should be asked to show in detail, with test results and
    > calculations, why his proposed solution is superior to using the
    > 3ware board in the manner in which it was designed to be used.
    >
    > If it's a given that software RAID will be used then I'd go for the SCSI
    > solution simply because there's not a "dumb" SATA RAID controller with
    > enough ports to do what you want to do.

    That also applies to SCSI.
    In RAID you are not supposed to be using more than four drives per channel,
    so with 12 drives you are looking at 3 channels.

    But since this is small-record IO you may be able to squeeze them onto a
    2-channel card.

    >
    > > > > Seems to me we have a misunderstanding here. If the OP
    > > > > wanted to do Hardware-RAID the assessment would look different.
    > > >
    > > > He's the one who stated that he was going to use the 3ware host adapter.
    > > > If he doesn't want to make use of its abilities then he should save his money.
    > > > >
    > > > > Arno
    > > >
    > > > --
    > > > --John