I read your recent reply with interest. I'll respond to your observations in order, but since I don't have a convenient "quote" function, this may seem a little disjointed; it isn't.
I know this doesn't apply to much of the readership, but I think the basic line of thought - the ideas - may be of some interest to those who would at least like to think about this type of system. I don't like the money I've had to spend, but it has paid me back, with the uptime I need and without problems. All of this is, of course, my opinion, but I feel it is an informed opinion from 25-plus years of experience, some of it associated with amazing screwups that embarrass me to this day.
I've used dual processors since 1994, starting with the AMI Titan III board, and have always enjoyed the improved response. The separation of most I/O functions in W2K makes it even more effective, and as Mark Minasi noted in one of his seminars, W2K is really designed to use SCSI - IDE just drags it down. Some of my software also uses it, so I'll stay with dual processors, though I may see how a single 3+ GHz one will work with some of it.
With dual processors, W2K seems to divide up the basic tasks, and you are allowed to assign certain tasks to certain processors. I have found that by default, at least on the machines I have used, I/O runs on one processor and the apps run on the other [along with certain services], as shown on the Performance tab of Task Manager. This makes much more efficient use of the two CPUs, as the response on the GUI is much smoother, especially with hardware [or, to a lesser but certainly noticeable extent, software] RAID. It greatly improves software RAID, which has its place in certain situations. Interrupts can be handled while the rest of the machine hums smoothly along. This might be attributable to the programs I tend to run, but I've seen at least some of it on ALL the dual-CPU machines I've used. Oddly enough, most of these machines had some form of SCSI on them, with that wonderful disconnect function.
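On W2K that assignment is done through Task Manager's "Set Affinity" dialog [SetProcessAffinityMask under the hood]. Just to illustrate the idea of pinning a process to one CPU - this is a sketch using Python's os.sched_setaffinity, the Linux analogue, not the W2K API itself:

```python
import os

# Pin the current process to CPU 0 - the same idea as W2K's
# "Set Affinity" dialog, via the Linux-only os.sched_setaffinity.
all_cpus = os.sched_getaffinity(0)       # CPUs we may currently run on
os.sched_setaffinity(0, {0})             # restrict this process to CPU 0
print("was:", sorted(all_cpus))
print("now:", sorted(os.sched_getaffinity(0)))

# Restore the original mask so nothing else is affected.
os.sched_setaffinity(0, all_cpus)
```

With two CPUs you would pin your interrupt-heavy services to one and leave the apps on the other, which is roughly the split W2K falls into by default.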
The difference is readily noticed when heavy network or file I/O is going on while some app is working and disk I/O is occurring - a situation common on my system.
The O/S overhead does not interfere with the running apps. SCSI is important because it doesn't hold the bus while waiting for the drive to deliver the data. Along with user-defined read-ahead, a 128MB cache, and other features on the onboard controller [technically a host adapter, but I use "controller" because it's easier], reads and writes are handled very efficiently. The fastest IDE drives still have the habit of holding the bus and stopping everything else. IDE RAID may be OK for some things, but this bus-seizure behavior, their basically inefficient handling of computer resources, and the fact that they are not designed for 24/7 operation lead me to think I will keep my hardware SCSI systems on my workstation, just as I keep ECC on my MB.
I don't understand what you were trying to say about the Adaptec controllers. I configure my RAID setup through a configuration utility on the board, which comes up with <Ctrl>A at bootup - NOTHING to do with the O/S. There is a Windows app, but I just use that to see what's going on with the system when I'm up.
What do you THINK I'm expecting from my system?? I get lightning-fast loads from the mirrored 8MB-cache 15K U160 drives, along with the reliability of duplexed mirrored drives [one drive on each channel]. WHAT really doesn't work "...all that well"? It works very well, as far as I have experienced. My son's newish game machine is outrun by my two-year-old clunker.
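The read side of a duplexed mirror is easy to model: every block exists on both drives, so reads can be split across the two channels. A toy calculation [the request count and per-read latency below are invented for illustration]:

```python
# Idealized mirror model: a mirror keeps identical copies on n_copies
# drives, and read requests spread evenly across them, so total read
# time scales down by the copy count. A single drive absorbs every read.
def read_time_ms(n_reads, per_read_ms, n_copies=1):
    return (n_reads / n_copies) * per_read_ms

single = read_time_ms(200, 6.0)               # one drive, ~6 ms/read
duplex = read_time_ms(200, 6.0, n_copies=2)   # duplexed mirror, two channels
print(single, duplex)   # the duplexed pair finishes in roughly half the time
```

Real drives queue and seek, so the gain is smaller than this ideal 2x, but the direction of the effect is what matters here.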
As for SCSI speed, you are missing one point - SCSI is a SYSTEM, drives and controller. The controller, as mentioned above, provides much of the performance, beyond the raw speed of the drives. I notice a "speed increase" because of the onboard cache, transferring at whatever speed the bus can handle, and it can usually handle two drives at or a little below rated speed. Read-ahead often puts the next read in the cache, further reducing the time necessary to access the requested data, all without locking up the bus for the duration. 128MB of ECC cache holds a lot of data. Get the idea?? ;-)
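The effect of controller read-ahead on a sequential workload can be sketched with a toy cache model [the block count and read-ahead depth here are invented for illustration, not the controller's actual parameters]:

```python
# Toy model of controller read-ahead: on a cache miss the controller
# fetches the requested block plus the next `readahead` blocks, so
# subsequent sequential reads are served straight from controller cache
# without another trip to the drives.
def disk_reads(requests, readahead):
    cache, misses = set(), 0
    for block in requests:
        if block not in cache:
            misses += 1
            cache.update(range(block, block + readahead + 1))
        # else: a hit - data comes out of the cache, bus stays free
    return misses

sequential = list(range(64))        # a streaming read of 64 blocks
print(disk_reads(sequential, 0))    # no read-ahead: 64 drive accesses
print(disk_reads(sequential, 8))    # read-ahead of 8: only 8 accesses
```

The same 64-block stream costs 64 drive accesses without read-ahead and 8 with it - which is the "next read is already in the cache" effect described above.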
More may not be enough, but a little better than enough gives you some headroom and allows you to better maximize whatever "juice" your system has. However, in cost-sensitive systems it isn't practical. A SCSI system disk might be effective, but I have found this not to hold on otherwise-IDE systems, so it might not work. When my IDE DVD burner was running, the system slowed down noticeably. I now have all of that on a separate little machine.
As to servers/workstations and backup, it depends on what you want your system to do. You are assuming a lot about what my system does and about what will maximize the features I find most important. You do those first, and do your best with the rest after they have been attended to. I have invested in the workstations because that's where the work is done, and my work is both processor- and disk-intensive - along with the fact that it's often very costly to replace, if it can be replaced at all.
You are trying to impose your idea of how the out-of-the-box server/client model should be implemented onto my workspace. Rigid adherence to any one model prevents you from maximizing the reliability and efficiency of the system.
IMHO.
Why won't my controllers do what I think when a drive fails?? They always have. What do YOU think they should do, and what do you think I expect? Again, why you keep saying that IDE is better than a SCSI subsystem escapes me. I just don't understand how you think any IDE system will be faster than a 15K U160 RAID 5 on reads, which is most of what my system's workstations do. Writes aren't all that slow, either, thanks to another built-in function of SCSI controllers, command queuing.
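Command queuing lets the drive hold several outstanding requests and service them in head-position order instead of arrival order. A toy seek-distance comparison [the cylinder numbers are invented for illustration]:

```python
# Why tagged command queuing helps: the drive can reorder its queue
# into one sweep across the platter (elevator order) instead of
# seeking back and forth in the order requests happened to arrive.
def total_seek(start, order):
    dist, pos = 0, start
    for cyl in order:
        dist += abs(cyl - pos)   # cylinders traveled for this request
        pos = cyl
    return dist

queued = [95, 5, 90, 10, 85, 15]        # six queued requests, head at 50
fifo  = total_seek(50, queued)          # arrival order: 445 cylinders
swept = total_seek(50, sorted(queued))  # one elevator sweep: 135 cylinders
print(fifo, swept)
```

Same six requests, roughly a third of the head travel - which is why queued writes on these controllers aren't the bottleneck you'd expect.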
I definitely need a faster MB, and I will address that early this spring. Things in the network world are no longer simple, so I have to go somewhere and learn a lot about the new technology. I will work on this.
I WAS the outside consultant you talk about, and I was employed by several F500 companies and the government. That was some time ago, but though the hardware changes, the underlying concepts remain pretty much the same. Being semi-retired, I will have the time.
My hardware isn't out in "left field" - it will deliver more efficient and reliable operation [except for the LAN upgrade, which is why I came here to ask for some help] than the standard system you are recommending. You are making a common mistake - assuming one size fits all.
I have found this not to be true. A judicious selection of equipment can fit much better if you can think in a creative manner and select from the features available to achieve your goal. I hate the phrase, but "...outside the box" sums it up.
And there's another aphorism that I feel applies - "you get what you pay for" - usually, anyway, if you are informed when you shop. Most readers here know this, and I'm sure they usually try to maximize their systems with what they can afford.
Hope this makes my ideas clear.
IMHO, FWIW, YMMV [for sure]
Got to go now, data to crunch, MB's to upgrade, networks to learn about.
Thanks, and sorry about the disagreement and the poor spelling. I hope we can agree to disagree [another icky bunch of words].
I'm going to fade away now, but I will lurk.
-=ed