The article is a good start, and much needed. I have had a hard time finding good performance data on Fast and Gigabit Ethernet cards.
I have my doubts about Qcheck. It probably reflects the performance of OS + driver + OS settings + NIC settings + NIC accurately enough, and is thus representative of real-world performance, but not of the pure NIC or NIC + driver. I've gotten sustained transmit rates in the 95 Mbit/s range on Linux using modified versions of DoS programs that send raw packets. The Beowulf crowd has favored NICs based on DEC's "Tulip" chip (now owned by Intel) as having the best performance, at 94 Mbit/s. It's an old chip, but still found on many cards. On Linux and Solaris, the load average doesn't reflect how much CPU time is spent servicing interrupts: the load may look low while the system feels slow, with long command response times and sluggish mouse tracking. Are your Windows numbers accurate?
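You can see that hidden interrupt cost directly on Linux by reading /proc/stat instead of the load average. A minimal sketch, assuming a modern kernel's /proc/stat field layout (user, nice, system, idle, iowait, irq, softirq on the aggregate "cpu" line) — older 2.2/2.4 kernels lumped interrupt time into "system" and won't expose these fields:

```python
def cpu_times():
    # Parse the aggregate "cpu " line of /proc/stat.
    # Field order assumed from modern kernels:
    # user nice system idle iowait irq softirq (in jiffies since boot).
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("cpu "):
                fields = [int(x) for x in line.split()[1:]]
                names = ["user", "nice", "system", "idle",
                         "iowait", "irq", "softirq"]
                return dict(zip(names, fields))
    raise RuntimeError("no aggregate cpu line in /proc/stat")

times = cpu_times()
total = sum(times.values())
# irq + softirq is time spent servicing interrupts -- exactly the
# component that the load average never shows you.
irq_share = (times["irq"] + times["softirq"]) / total
print(f"interrupt time: {irq_share:.1%} of CPU jiffies since boot")
```

Sampling this twice around a benchmark run and diffing the counters gives a rough picture of how much CPU the NIC's interrupt load actually ate, regardless of what `uptime` reports.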
I second the suggestion of testing less expensive NICs: the $10 Realtek-based ones, the $20 National-based Netgear, and others in between. They can't break your budget if you have to buy them yourself, and that could save you money by taking less time than contacting and hounding makers for loaners and paying the return shipping. Copper Gigabit cards based on National's chipset now cost only what single-port "server" 10/100 cards cost until recently. I have a pair at home, and they speed up backups. The cost is still half of what gamers spend twice a year on graphics card upgrades. Also very interesting in that price range are Alacritech's NICs, which offload 90% of TCP processing from the NT/2000 OS.
Just as drivers have a huge impact on graphics card performance, they affect NIC performance, and they also compensate for chip flaws. Read the Linux driver sources to see which chips are good and which are bad. I suspect all the drivers could be a little better, or at least tuned with optimized parameter settings.
Now that small switches are so affordable, hubs should be avoided. Current 100 Mbit cards still suffer from the channel capture effect, a flaw in collision resolution that the IEEE decided not to address because switching is nearly as cheap as bridging.
Keep up the good work, and lobby for multiple/wide/fast PCI buses on "consumer" motherboards!