  1. First - Please turn off the Caps Lock. It looks like you're shouting at us, and it's harder to read. (To your benefit, however, your grammar is excellent - you can use punctuation, well-structured sentences, etc., unlike some who get upset because we can't understand their version of English... :lol: :lol: :lol: )

    You also need a SCSI controller card to install your SCSI drive. (No SCSI ports on your mobo are there?)

    Since you intend to boot to the SCSI drive, make sure the controller card is a bootable one, and have the driver diskette (drivers should come with the card, though you may need to copy them from the CD onto a diskette) ready for XP installation, and press F6 when prompted.

  2. Thank you. I did however miss the period after the E in I.D.E. Anyway, it always helps being nice when you want to learn about something.

    I already knew my motherboard doesn't have any connectors for S.C.S.I., so I feared I would have to get a card. I was looking, and there are hundreds of them in all different price ranges with all different specifications, and I really don't know where to start researching which is which. There are three kinds: P.C.I. Express, P.C.I. X and P.C.I. I have 2 free P.C.I. X slots and one free P.C.I. slot. I would want one that is the fastest and without any of those useless premium features.

    Thank you.
  3. Some boards come with SCSI adapters built in and I was too lazy to Google your board; that's why I asked. :)

    The Gigabyte board you mentioned in your first post has no PCI-X, just PCI and PCI-E (Express). If you can get a PCI-E SCSI card it will likely be fastest, but PCI will probably be OK too. I'm not really read up on SCSI controllers, so I think I'm in over my head recommending one. Hopefully someone else will have some info.

  4. I would suggest looking at Adaptec or LSI Logic.

    NOTE: PCI-X and PCI-E aka PCI-Express are not compatible, they are completely different. PCI and PCI-X are parallel PCI interfaces. PCI-E/PCI-Express is a serial PCI interface.

    IIRC only LSI Logic offers a PCI-Express SCSI controller so far.

    Since your board only has 32bit PCI slots and PCI Express slots, you have to buy either a 32bit PCI card, a 64bit PCI-X card that is backward compatible with 32bit PCI, OR a PCI Express card. The problem is 32bit PCI tops out at 32bits x 33MHz = 132MB/sec MAX.

    PCI-Express is a lot faster in THEORY however in many cases the cards do not actually run as fast as the interface can go!

    Many 64bit PCI-X boards will work in 32bit PCI slots however your performance will be degraded because they only run at 32bits x 33MHz in 32bit slots. In 64bit slots PCI-X can run at 64bits x 33MHZ, 66MHz, 100MHz or 133MHz offering much better interface performance however your board does NOT have PCI-X. PCI-X slots are most commonly found in high-end server motherboards.
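    The bus math above can be sketched out; this is just the peak-rate formula (bus width in bytes times clock), so real-world throughput will be lower due to protocol overhead and bus sharing:

```python
# Peak bandwidth of a parallel PCI/PCI-X bus:
# (width in bits / 8) bytes per transfer x clock in MHz = MB/sec.
def pci_peak_mb(width_bits: int, clock_mhz: int) -> int:
    return (width_bits // 8) * clock_mhz

print(pci_peak_mb(32, 33))    # 132  -- plain 32bit PCI
print(pci_peak_mb(64, 33))    # 264  -- PCI-X at its lowest clock
print(pci_peak_mb(64, 66))    # 528
print(pci_peak_mb(64, 133))   # 1064 -- top-end PCI-X (nominal 133MHz clock)
```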

    Newegg has the LSI Logic LSI00008 PCI Express SCSI RAID storage adapter - Retail boxed
    for $655 which is a lot of money.

    THEORETICALLY this controller can use PCI-Express 8x for up to 4GB/sec peak bandwidth in full duplex mode or 2GB/sec in simplex mode, which is REALLY fast. Of course a 15,000 RPM U320 SCSI drive cannot go that fast; however, several U320 SCSI drives together can get closer to the theoretical max.

    Plugging one U320 SCSI drive into a dedicated U320 SCSI channel will offer maximum performance.

    To quote Newegg:
    "PCI Express 2.5 GHz x8 link for 2 GB/sec peak simplex or 4 GB/sec peak dual simplex bandwidth"

    I strongly recommend a U320 SCSI controller with one or more 15,000 RPM U320 SCSI hard drives.

    I'm sure you can find a good deal on a U160 or U320 SCSI Adaptec or LSI board on eBay.

    If you had a Tyan S2882G3NR-D board
    you could use 2 64bit 133MHz PCI-X U320 SCSI controller cards or 64bit 133MHz PCI-X SATA cards running at 64bits x 133MHz each to achieve maximum performance. This is possible on this board and boards like it because they have 2 independent PCI-X buses each capable of running at full speed. Only special server boards offer this feature, most boards only have 1 shared PCI bus.
  5. What is your reasoning for going to SCSI? Nowadays, unless you need 24/7 uptime with constant disk access, SATA would be a much better option for you. You can get 2 SATA WD Raptor drives in a RAID 0 config that are nearly as fast as SCSI and are far cheaper. Two Raptors are about $210, and you already have the RAID controller on your board, while with two SCSIs and a controller you're looking at about $475. You could also do SATA-II, but that would be more expensive. You may want to consider that route before going SCSI. It's easier, cheaper, and there are more people on forums like this that can help you.

    That's just my $0.02
  6. PS Because of the special design and features of SCSI, even a U160 SCSI controller with a 15,000 RPM U160/U320 drive will be much faster and more responsive than a PATA IDE or SATA IDE drive. Even U80 SCSI drives perform many operations faster than their IDE counterparts because they are designed and engineered for server use, whereas IDE, whether PATA or SATA, is designed to be CHEAP. SCSI has had TCQ (tagged command queuing) and other performance-optimizing technology for MANY MANY years; SATA just recently introduced it in some select drives and controllers. SCSI has always been the better interface hands down; unfortunately it is more expensive [ unless you raid eBay ;-) ].
  7. PPS I have heard/read of people that have bought a number of older SCSI controllers and hard drives on ebay cheaply and put huge RAID arrays together offering great performance at a greatly reduced price. The only downside is you're using more electricity and you need better cooling and ear protection / noise insulation.

    SCSI's TCQ is fantastic! The number of IO operations / sec has always been great.

    SCSI also supports up to 7 or 15 drives per channel allowing for huge arrays to be assembled.

    Multi-channel SCSI controllers offer even better performance.

    SCSI CRUSHES IDE especially on Linux and BSD
  8. When it comes time to pick a SCSI drive - go here:
    Read over what they have to say about the top-performing drives & pick one.
    I have a Maxtor Atlas 10K V (73GB version). It's somewhat near the top - the best I could find for capacity vs performance. The 300GB version sells for around $1200 USD.

    Chances are also you will NEED some dedicated cooling for your drives. They run a lot hotter than IDE, and you will also want to protect your investment :)

    If you just want to upgrade your existing system - follow the advice of other posters here on what cards to get (LSI or Adaptec).
    If you want to get a new SCSI system - I HIGHLY recommend getting a board with a built-in SCSI controller. The board will cost more than regular IDE boards - but you won't be shelling out $100-$300 for a controller card either.

    Since you don't want RAID - you will do fine with a single or dual channel card. Dual channel just means there are plugs for 2 cables. And each cable will support up to 15 physical drives....

    and welcome to the world of high performance drives 8)
  9. OK, my 2 cents...
    You will get better performance per dollar from 2 Raptors in RAID 0 on your built-in SATA controller than you could from any reasonably priced SCSI solution.

    However a 15k rpm scsi drive is pretty cool too....
    How do you feel about refurbished stuff?

    Seagate ST336753LW 15k rpm for $125

    Adaptec SCSI Card 29160 $89

    LSILogic LSIU320 SCSI Card $110
  10. Quote:
    SCSI CRUSHES IDE especially on Linux and BSD
  11. SATA is ok price/performance wise. However SCSI makes a lot of sense in a server or high performance workstation or even high end gaming PC, especially for the boot drive.

    I encourage people to use a 15,000 RPM high-end U320 SCSI boot drive and SATA HDDs for raw storage. The boot drive does not have to be very big, 36, 73 or 147GB will work just fine.

    For higher capacities (200GB+) SATA makes sense.

    In the end I believe this hybrid approach offers excellent performance and reliability (SCSI) AND a lot of inexpensive storage (SATA).
  12. hehe, edited out my reply to you, sorry man, but I understand what you mean.
    However, in this particular case, I cannot see that SCSI has any advantages whatsoever. All due to the limitations of the PCI bus that this guy will run into. Yes, he has PCI-Express, but the cards seem to be way too expensive.

    For about $320, 2 10krpm raptors in raid 0 will outperform one SCSI 15krpm drive. To get that type of performance out of scsi in this instance is cost prohibitive.

    As far as reliability goes, I see SCSI drives pop continuously, it seems to be all about the drive itself... SCSI/SATA doesn't seem to make a difference IMO.
  13. One other thing you might not have thought about is that 15k RPM drives are, on average, a hella lot more noisy than other, slower, drives.
  14. I agree that the PCI bus will be the bottleneck however a good SCSI controller with a nice 15,000 RPM U320 drive can make up for those limitations.

    Granted it is not possible to exceed 1056Mbits/sec or 132MB/sec, which is the max on good old parallel PCI; however, SCSI still has MUCH better head-movement optimization (true hardware TCQ), significantly better caching algorithms, and does not have the drive size and addressing limitations IDE has, or a 1024-cylinder limit, which can still cause problems.

    Also some operating systems (Windows, etc.) have problems partitioning or booting from SATA drives (Linux and BSD work just fine), but SCSI makes a better boot device hands down regardless of operating system.

    For the record, having to feed windows a 3.5" floppy to get it to work with some SATA or even SCSI controllers is a HUGE pain in the you know what!!!! (again Linux and BSD work FINE).

    SCSI also tends to be more reliable and drives have longer warranties.

    I have had SCSI drives last over 15 years, naturally you shouldn't be using ANY drive (SCSI or IDE) for critical storage needs after that many years of operation, but it's good to know well engineered and well manufactured drives tend to be more reliable overall.
  15. The Raptor (third gen) has TCQ as well.
  16. I have IDE hard drives over 10 years old, and two of them - one about 5 years old, the other about 7 - are still running an email/internet PC.
  17. Heck, my step-mother still prefers and uses an original 386 Windows machine for card games. It is still going, and she uses it just about every day! BTW, she also has another PC, but only uses it for email, the 386 is in a corner in the kitchen.
  18. Theory is all well and good.... However, unless this guy REALLY wants SCSI, I cannot in good conscience recommend it in this particular case.

    Yes, 132MB/s is the theoretical limit of the PCI bus... BUT did you miss it? This guy has ONE PCI slot free... That other PCI slot is in use for something... Hmmm, what if it is a sound card? Ever have a "hard drive buzz" come through your sound card from a PCI RAID card? I have.
    Plus, how much is that other card going to cut into the hard drive performance? How much of the 132MB/s bandwidth is it going to eat?

    At this point, unless this guy has money to burn on a PCI-Express SCSI card, I would recommend, for price and performance, SATA.

    There is much more to performance than just speed.
    A SCSI drive that "buzzes" the sound card on every access is not good performance. A gigabit ethernet card that robs hard drive bandwidth on every big network file transfer is not good performance.
  19. Let me clarify, I have had SCSI drives last over 15 years in non-mission critical but very busy web servers, now obviously those drives were built very well and I got very lucky.

    I have had very expensive IDE, SCSI and MFM drives arrive DOA or die on me shortly after installation - that happens ALL the time.

    What I am trying to say is, in general SCSI drives are designed and engineered for high availability, high duty cycle environments under heavy load. IDE drives tend to be just cheap.

    ALL types of drives suffer from "DEATHSTAR" syndrome and every manufacturer has released drives that were defective, crashed-a-lot and sucked overall. In my experience however higher-end SCSI drives tend to beat the crap out of ALL other drives (SCSI and IDE). In some cases you DO get what you pay for!

    A few months ago I received a large batch of pretty expensive Seagate Barracuda 7200.8 SATA drives and HALF (50%) of them were either DOA or died within 2-3 months.

    Very similar high-end SATA drives from Hitachi (formerly IBM) died on me too but the failure percentage was a lot lower.

    Obviously there was something seriously wrong with that batch! Seagate is known to be one of the better HDD manufacturers and they have made some of the best SCSI drives ever.
  20. A relatively inexpensive U160 or U320 PCI SCSI controller from ebay with a high-end U160 or U320 SCSI drive (preferably NOT from ebay) will still outperform PATA IDE or SATA IDE even if the interface speed is limited to 132MB/sec due to the PCI limitation.

    When trying to multi-task, IDE just SUCKS: on my Linux box, when updatedb runs my machine slows to a crawl, and on Windows, when my anti-virus program runs it destroys my performance in anything else I'm running. This is particularly true of PATA IDE, less so of SATA, but the problem is still there.

    With SCSI, when updatedb or something uses the drive intensively, you can't even tell that the drive is doing anything else. This is true even under Windows.

    Opening a huge mailbox that is several GB takes a few seconds with SCSI under Linux, while it takes 10 minutes with IDE - I have seen the exact same thing on proprietary Unix machines that support both SCSI and IDE.
  21. Quote:
    A relatively inexpensive U160 or U320 PCI SCSI controller from ebay with a high-end U160 or U320 SCSI drive (preferably NOT from ebay) will still outperform PATA IDE or SATA IDE even if the interface speed is limited to 132MB/sec due to the PCI limitation.

    So, you are saying that a 15k scsi drive on a pci bus would outperform 2 raptors in raid 0 on a SATA link directly into the ICH6 on this particular motherboard?

    LOL, yeah, maybe if you gimped the SATA drives by switching the transfer mode to PIO.

    Your statements seem to me to be suspiciously pre UDMA.
  22. 1xFujitsu MAU3036NP 36.7GB 15,000 RPM SCSI Ultra320 68pin Hard Drive - OEM $180

    1xLSI 22320 64-bit PCI-X u320 SCSI dual channel ultra320 $130

    Total $310 USD DELIVERED


    2xWD Raptor WD740GD 74GB 10,000 RPM Serial ATA150 Hard Drive - OEM $320

    Total $320 USD DELIVERED


    2xWD Raptor WD360GD 36.7GB 10,000 RPM Serial ATA150 Hard Drive - OEM $217

    Total $217 USD DELIVERED

    In THEORY 2 raptors in RAID 0 will be faster than 1 15,000 U320 SCSI drive.

    However RAID 0 is SUICIDE MODE because it breaks too easily.

    As far as the controller goes the LSI Logic controller is WAY better than ANY SATA controller with the exception of the 3Ware Escalade series which is roughly comparable.

    Keep in mind most SATA chipsets are seriously CHEAPO, even Intel's, while an Adaptec, LSI Logic or other high-end SCSI controller usually has true hardware acceleration and some of the nicer ones have onboard RAM whereas most SATA controllers do not.

    I have personally tested SATA drives with very expensive 3Ware Escalade series controllers ($600 each!) and while I was VERY impressed with the interface to interface performance, sustained performance was not very impressive and multi-tasking performance was not very good either.

    I would recommend:

    1xFujitsu MAU3036NP 36.7GB 15,000 RPM SCSI Ultra320 68pin Hard Drive - OEM from newegg


    1xLSI 22320 64-bit PCI-X u320 SCSI dual channel ultra320 from ebay

    for booting

    and one or more reasonably priced SATA drive(s) for storage plugged into the onboard SATA controller. I would NOT recommend RAID 0 !!!

    This article is from 2003 however the maildir comparison is still valid today!

    Drive                            IDE: WD 40GB    SCSI: Quantum Atlas V 9GB
    Speed                            7,200 RPM       7,200 RPM
    Buffer                           2MB             4MB
    Avg read seek time               8.9ms           6.3ms
    Buffer cache reads (hdparm -T)   375MB/sec       340MB/sec
    Buffered disk reads (hdparm -t)  45MB/sec        29MB/sec
    Time to read maildir             7 minutes       1 minute, 10 seconds

    Time to read maildir             IDE drive       New SCSI drive
                                     7 minutes       28 seconds

    I have seen even better tests and I will post some more info for you as soon as I find it.
  24. It'd be interesting to see how much that gap has closed using today's SATA 2.5 controllers; I suspect quite a bit...
  25. Quote:

    Comparing a 10000 rpm scsi to a 7200 rpm ide. hehe

    Even the transfer times seem suspect to me. Did the guy forget to turn on DMA?

    There really wasn't any info in that article that is useful.. How was it set up? What mobos were used? What OS's? Seems to imply linux of some flavor. What drivers were used?

    Please send me another test.
  26. That was about the lamest review to use...
  27. ok you asked for it!

  28. Quote:
    ok you asked for it!


    That information is basically useless, as it doesn't tell us anything about the systems you are comparing. If we saw that this info came from benchmarks with ALL other factors being equal (hardware, drivers (non-HDD), you know the deal), we could make a real comparison. However, those graphs pretty much only tell us that one disk type can beat another in a certain situation. The graphs aren't even labeled as to what you are comparing.
  29. Cool graphs. Did you make these on your system? If so, can you describe what each quadrant represents? Also, your setup, OS, drivers, etc.?

    Is it safe to assume that the left is IDE and the right is SCSI?
    Thanks :)
  30. lol :-)

    Those graphs were not what you thought they were ;-)

    That's because I didn't get around to explaining what they represent.

    Those tests were executed on the exact same hardware with the same controller and drives. The only difference was the OS. The tests on the left were executed under FC3 i386 and the tests on the right under FC3 x86_64 AMD64. You can see FC3 x86_64 AMD64 consistently destroys FC3 i386.

    The server itself was a Dual Opteron with 4GB of RAM and a 64bit PCI-X 3Ware Escalade 9500S-8 hardware RAID controller with 128MB RAM + BBU and 8 drives in RAID5.

    As you can see Linux kicks serious butt until the 4GB of RAM is exhausted and then the performance drops off significantly which is to be expected.

    Keep in mind this is a high end $600 SATA RAID controller with a boatload of onboard RAM and a PCI-X interface (64bit x 66MHz) in a Dual Opteron server with 2 independent PCI-X buses and 4GB of PC3200 RAM. Regular SATA controllers do not get anywhere close to this level of performance.

    The original poster does not have PCI-X or a Dual Opteron. The controller will work in 32bit mode at 33MHz but you certainly won't see 4GB/sec transfers on a 32bit bus. Obviously the 4GB/sec transfers were purely DMA transfers. There is no way the controller can do 4GB/sec sustained.

    The real world performance is on the tail end of the graphs along the right edge.

    What does this demonstrate?

    It demonstrates that a quality hardware RAID controller can offer great performance, however there is NO way an onboard controller will get anywhere close to this level of performance.

    But here's the kicker: the 3ware is actually closer to a SCSI controller than to a SATA controller - in fact it is my understanding it uses SCSI (and a SCSI driver) to communicate with the system - it is more like a SCSI-to-SATA bridge.

    A quality SCSI controller will offer very similar performance to the 3ware, if not better. That performance will vary, however, depending on the controller used, the amount of onboard RAM (if any), the type and number of drives, the host interface (PCI in this case), the RAID mode, the OS, the CPU type, chipset, amount of system RAM and several other factors.

    The amazing thing is going from i386 to AMD64 made a huge difference.

    Unfortunately the original poster has an Intel system which will behave differently even if it supports x86_64 (which it should if the CPU has 64bit extensions).
  31. Please see my clarification above.


    Yes I ran those myself - took several days! I used 4+GB and 16GB data files for the benchmark tests.

    Left = FC3 i386
    Right = FC3 x86_64

    Dual Opteron 4GB RAM PC3200 Corsair XMS REG ECC

    3Ware 9500S-8 8port SATA HARDWARE RAID CTRL with 128MB RAM+BBU PCI-X 64bit x 66MHz

    8xSATA 7200 RPM HDDs in RAID5
  32. Hmmm, honestly I am at a loss as to what you are saying.

    What does this have to do with SCSI bottlenecking on PCI?
  33. 8) What proggy did you use for the 3D charts? Is it freeware? :lol: I like them nonetheless, but they would be better with descriptions beside them. :wink:
  34. Cool.

    I would love to get my hands on a system like that. (Even if my Linux skills are ashamedly lacking :oops: ).

    You are right though, the OP wouldn't see anything close to that being on an Intel system, and using Windows.

    I don't want to debate SCSI vs SATA, 'cause SCSI is definitely faster, more reliable and (usually) louder. I am just wondering what the OP's motivation for going SCSI is, as it is generally more expensive, and for daily real-world use, there will not be a greatly noticeable difference.
  35. Quote:
    I don't want to debate SCSI vs SATA, 'cause SCSI is definitely faster, more reliable and (usually) louder.

    LOL, my whole point is that IN THIS CASE, SCSI will most definitely not be faster. Are you guys asleep or something?
  36. Good question!

    We are still comparing apples and oranges.

    My argument was that not all SATA controllers are the same.

    It also depends on how the SATA controllers are glued to the system, there is only so much bandwidth to go around.

    Anyway the moral of the story is: sometimes you do get what you pay for.

    A lot depends on software and drivers as you can see in the graphs 64bit Linux crushes 32bit Linux.

    If you pay close attention to the tail end of the graphs you will see that the sustained performance is a lot lower than the peak performance when the controller is performing CACHE to system RAM DMA transfers. The bandwidth is about what 32bit PCI can actually provide with 8 drives. I would argue that a single U320 SCSI drive attached to a PCI SCSI controller would not be able to completely flood the PCI bus. I would expect it to peak around 80-100MB/sec maybe a bit more or a bit less.

    For the record, random access performance was fairly poor on the 3ware SATA CTRL, and even poorer if the drives were asked to perform multiple operations at once. In that scenario SCSI does way better.
  37. ??? I still don't understand the relevance of showing that Fedora Core 3 32-bit is so much slower than Fedora Core 3 64-bit.

    I thought that we were talking about a single 15k rpm drive on a PCI bus that is contending for bandwidth from another device on the PCI bus, vs. 2 raid 0 raptors on a contention free link directly to the ICH6 hub on this guy's motherboard? And the performance issues of that set up?
  38. There is something else to consider here as well. The OP will really benefit from having a PCI-X U320 SCSI controller and drive if the OP decides to upgrade to a server board with PCI-X in the future.

    PCI-X server boards are going cheap on eBay, and they are getting cheaper overall too.

    Even with a PCI-X CTRL running at 32bit x 33MHz with one drive performance should be really good.
  39. And DON'T say again that RAID 0 is suicide, because that is just plain retarded. The guy himself said that he uses storage on other high cap IDE drives. This is pure performance.
  40. RAID 0 does roughly double the chance of array failure - if either drive dies, the whole array is gone.
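    The back-of-the-envelope math behind that, assuming drive failures are independent (which is optimistic for drives from the same batch):

```python
# A RAID 0 array dies if ANY member drive dies:
# P(array fails) = 1 - (1 - p)^n, for per-drive failure probability p.
def raid0_failure_prob(p_drive: float, n_drives: int) -> float:
    return 1 - (1 - p_drive) ** n_drives

# With a hypothetical 5% annual failure rate per drive:
print(round(raid0_failure_prob(0.05, 1), 4))  # 0.05   -- single drive
print(round(raid0_failure_prob(0.05, 2), 4))  # 0.0975 -- almost double
```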
  41. The OP expressed interest in using one 15K RPM SCSI drive to boot from and one or more IDE drives for storage.

    I think that makes some sense.

    The OP also said it would NOT be RAID'ed.
  42. LOL, *sigh* Although it is true, critical data is not lost because the SCSI or SATA solution is for the system drive. Storage drives are being handled by other means. As stated by the OP.
  43. Out of 40 7200RPM SATA HDDs from Seagate 20 of them died on me while in RAID 5 (thankfully not all at once). Had the arrays been RAID 0 instead of RAID 5 ALL data would have been lost.

    In RAID 5 you can afford to lose one drive and keep going - granted at reduced performance.

    A RAID 5 rebuild can take HOURS and can seriously degrade performance; I can testify to that from firsthand experience.

    On the 3ware an init takes about 7-8 hours, a rebuild can take 7-14 hours depending on load.

    Sometimes RAID is more trouble than it's worth.
  44. And if he loses the SCSI drive all data is lost as well.


    Can we get to the Point?
  45. Which is why I recommended a hybrid solution SCSI boot + SATA storage.

    I always advise clients to have at least n+1 backups.

    The OP is strongly encouraged to rsync the data from the SCSI boot drive to the SATA storage drive for backup purposes.

    For those not familiar with rsync: rsync is a differential backup tool and much more (it runs on Linux, BSD and Windows).
  46. I have thrown down the gauntlet and you keep dancing around the original point.

    On this guy's motherboard....
    SATA: 2x Raptor in RAID 0 will be faster than a 15k RPM SCSI drive.

    Which was my original point to begin with.

    Edit: It was also hergieburbur's point before mine as well.
  47. Honestly, on this motherboard, recommending any type of SCSI is just plain stupid. I did however give some links to great prices on refurbs, but from a pure performance standpoint ON THIS MOTHERBOARD, SCSI OF ANY PRICE does not have the ability to beat the performance of 2 Raptors in RAID 0.

    Yes you read correctly, I said OF ANY PRICE.

    Give me a link to a PCI-Express SCSI card that will work in this guy's system.

    I dare ya. :)
  48. You are correct, your sequential access would be faster with 2 10K RPM Raptors in RAID 0 rather than 1 15K RPM SCSI drive.

    Your random access will likely not be any better. And your actual performance under load probably won't be better either.

    So: $speed+=1; $reliability/=2;

    I have had so many things go wrong with RAID arrays that I cannot in good conscience recommend RAID 0 to anyone.