In my experience, SCSI used to be the only answer for performance issues in high-performance workstation/desktop systems. This is no longer the case.
The benefits of SCSI over IDE are:
Lower CPU utilization - less of a factor now with high-performance CPUs and UDMA.
More devices per channel - SCSI supports up to 15 drives per channel while IDE supports 2. IDE RAID adapters often overcome this by providing 2-4 or more channels. In a workstation RAID array, more than 4-6 drives will saturate the PCI bus regardless of SCSI or IDE.
Higher RPMs = faster access times - this is only useful in database-server-type systems (unless you do an extraordinary amount of text searching or the like on your PC).
Higher total bus transfer speeds - more drives means higher bandwidth needs (this only helps if your PCI bus is capable of supporting the higher speeds; 64-bit IDE cards are available that support 4 x 133MBps).
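To see why the PCI bus matters here, a quick back-of-the-envelope check (drive/channel counts are illustrative; the per-channel figure is ATA/133's burst rate, and the PCI numbers come from the standard bus specs):

```python
# Aggregate IDE burst bandwidth vs. what the PCI bus can actually carry.
channels = 4
per_channel_mbps = 133                  # ATA/133 (UDMA mode 6) burst rate, MB/s

total_ide = channels * per_channel_mbps # 532 MB/s aggregate burst

pci_32_33 = (32 // 8) * 33              # ~132 MB/s: standard 32-bit/33MHz PCI
pci_64_66 = (64 // 8) * 66              # ~528 MB/s: 64-bit/66MHz PCI slot

# A 4-channel ATA/133 card swamps ordinary PCI but roughly fits 64-bit/66MHz.
print(total_ide, pci_32_33, pci_64_66)
```

So a 4 x 133MBps card only makes sense in a 64-bit slot; on ordinary desktop PCI the bus itself is the ceiling.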
The benefits of IDE over SCSI:
Lower cost per GB - ~$200-300 for an 18GB SCSI drive vs. ~$200 for a 100GB IDE drive.
Lower cost for RAID adapter.
Basically, unless you have database-server needs, you're currently better off going with IDE. I used to buy all SCSI, but IDE is now good enough that you can spend under $800 and get 300GB of RAID 5 (4 x 100GB, minus one drive's worth for parity) for redundancy and speed, or 400GB of RAID 0 (4 x 100GB), which would be faster than any 2-drive 15K RPM SCSI RAID array.
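The capacity and cost math above works out like this (prices are the post's circa-original figures, not current ones):

```python
# RAID 5 vs. RAID 0 usable capacity and rough $/GB, using the post's numbers.
drives = 4
size_gb = 100
price_per_drive = 200                    # assumed ~$200 per 100GB IDE drive

raid5_usable = (drives - 1) * size_gb    # RAID 5 gives up one drive to parity
raid0_usable = drives * size_gb          # RAID 0 stripes across all drives

total_cost = drives * price_per_drive    # drives only, excluding the adapter
print(raid5_usable, raid0_usable, total_cost, total_cost / raid0_usable)
```

That's 300GB redundant or 400GB striped for $800 in drives, around $2/GB even before you count the cheaper adapter.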