I am the author.
>I also find it dancing with vagueness as I'm trying to narrow my parts search. Are you really suggesting we use PCI-X server motherboards? Why? (Besides the fact that their bandwidth is separate from normal PCI lanes.) PCI Express has that same upside, and is much more available in a common motherboard.
There are many possible choices in building a fileserver.
I chose to use motherboards that I already had. Sure,
PCI-E is faster, and if I were buying all new hardware, I would
use it. However, for those on a budget, $40 will get you a
used motherboard with two CPUs and lots of PCI-X slots at
http://www.surpluscomputers.com/348725/accelertech-tsunami64-dual-amd-opteron.html
>You explain the basic difference between fakeRAID and "real RAID" adequately, but why should I purchase a controller card at all? Motherboards have about six SATA ports, which is enough for your rig on page five.
True, the rig on page 5 only had 6 hard drives, so a
motherboard with 6 SATA ports would work. However, that
motherboard has no usable SATA ports. Additionally, I have
since added a 7th drive, and the racks allow me to have up
to 10 drives. Clearly, if the motherboard has enough ports
you don't need a controller card, but I mentioned controller
cards for those situations when the motherboard doesn't.
>Since your builds are dual-CPU server machines to handle parity and RAID building, am I to assume you're not using a "real RAID" card that does the XOR calculations sans CPU? (HBA = Host Bus Adapter?)
Correct. Real RAID cards start around $500 and can easily
cost $1,000.
>Also, why must your RAID cards support JBOD? You seem to prefer a RAID 5/6 setup. You lost me COMPLETELY there, unless you want to JBOD your OS disk and have the rest in a RAID? In that case, can't you just plug your OS disk into a motherboard SATA port and the rest of the drives into the controller?
I said the controller card you buy has to support JBOD.
The Rosewill card calls itself a RAID card (RAID 0 and 1),
but it also supports JBOD, so you can use it with software
RAID. If a card doesn't support JBOD, you won't be able to
use it with software RAID (like the SATA controller on the
NCCH-DL). I have all the disks, except the OS disk, in
software RAID.
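To make the JBOD point concrete, here is a minimal sketch of building a software RAID 5 array under Linux with mdadm. The device names are hypothetical; a JBOD controller simply exposes each drive as its own device (check yours with lsblk), and the md driver does the rest:

```shell
# Hypothetical device names -- substitute your own (see lsblk).
# The controller presents each disk individually (JBOD); Linux md
# combines them into a RAID 5 array entirely in software:
mdadm --create /dev/md0 --level=5 --raid-devices=6 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# Put a filesystem on the array and watch the initial build:
mkfs.ext4 /dev/md0
cat /proc/mdstat
```

A fakeRAID or hardware RAID controller that hides the individual disks behind its own volumes can't be used this way, which is why JBOD support is the requirement.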
>And about the CPU: do I really need two of them? You advise "a slow, cheap Phenom II", yet the entire story praises a board hosting two CPUs. Do I need one or two of these Phenoms -- isn't a nice quad core better than two separate dual core chips in terms of price and heat?
When I said CPU, I meant processor core. I didn't say two
sockets. You want more than one core so that one can handle
the RAID XOR calculations. For software RAID it doesn't
matter whether the cores are in one socket or several. If
you are buying new hardware, it is easy to get multiple
cores in a single socket, which is why I recommended the
Phenom II, with its 4 cores.
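If you're curious how your cores handle the parity math, the Linux md driver benchmarks its XOR and RAID 6 routines when it loads and logs the results. A quick way to see which routine it picked (output format varies by kernel version):

```shell
# The md driver benchmarks its parity routines at module load time
# and logs the winner; grep the kernel log for those lines:
dmesg | grep -i -e 'raid6' -e 'xor'
```

On any reasonably modern processor these routines run at several gigabytes per second per core, which is why a cheap multi-core chip is plenty for software RAID.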
>What if I used a real RAID card to offload the calculations? Then I could use just one dual core chip, right? Or even a nice Conroe-L or Athlon single core?
Sure. The only problem is cost. A real RAID card will cost
more than virtually any CPU. If my motherboard fries, I can
get another one and plug in all of my drives. A new
motherboard and memory will run me $100 to $200. If a
hardware RAID controller breaks, you will need to buy
another one, usually the same model since the on-disk format
is controller-specific, and that costs $500 - $1,000.
Software RAID is much more flexible, and it is cheaper.
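That flexibility is worth spelling out: Linux md writes its metadata to the disks themselves, so the array survives a motherboard swap. A minimal sketch of bringing it back up on replacement hardware:

```shell
# After moving the drives to a new board/controller, scan for md
# superblocks on the disks and reassemble the array automatically:
mdadm --assemble --scan

# Confirm the array and all its members came back:
cat /proc/mdstat
mdadm --detail /dev/md0
```

No matching controller is required; any machine that can see the drives can assemble the array.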
>Finally, no mention of the FreeNAS operating system? I've heard about installing that on a CF reader so I wouldn't need an extra hard drive to store the OS. Is that better/worse than using "any recent Linux" distro? I'm no Linux genius so I was hoping an OS that's tailored to hosting a NAS would help me out instead of learning how to bend a full blown Linux OS to
It is certainly a viable option. It is a bit less flexible
than a full Linux system, but much easier to set up, as you
point out.
>Thanks for the tip about ECC memory, though. I'll do some price comparisons with those modules.
It isn't much more expensive.