
Gigabit Speed (Sandra) slow?

March 19, 2004 4:56:22 AM

I just ran a Sandra benchmark (2004.10.9.89) on my network between two computers on the gigabit segment, and found that it rates my bandwidth at only 36 MB/s (it claims a 1000Base-TX gigabit full-duplex link should be able to do 117 MB/s), so I'm trying to figure out what the problem is. Anyone know what could be causing the slowdown?

Here's my setup:

Linksys EG1032 V2 gigabit Ethernet cards, plugged into 32-bit 33MHz PCI slots (one in each PC)
Linksys Instant Gigabit Ethernet switch (EG005W)

Both computers should be plenty fast enough: one is an Athlon XP 2400 on nForce2 with 1GB of RAM, the other is a P4 1.5 on some random Intel board with 256MB of RAM. Each computer has one gigabit Ethernet card, plugged directly into the switch. The switch uplinks to a 100-megabit router port, but that connection isn't used in the test; the traffic goes between the two computers on the gigabit switch. One PC connects with a short 10-foot cable, the other with a 50-foot cable from across the room.

I'm about to try the test again without the switch uplinked; I'll see how that goes. I hope somebody has a suggestion, because I'm planning to start using a RAID drive over my network for fast storage, and this problem would completely defeat that.
March 19, 2004 5:44:52 AM

Just to add, I tried it again with a couple of different layouts:

the switch unplugged from its uplink and connected only to the two PCs (same result)
the switch removed from the segment entirely, with just a single Cat-6 cable between the two NICs (same result)

I don't know what could be slowing down these NICs. They're advertised as gigabit network cards, and I paid decent money for them, so they should be able to manage at the very least 80 to 90 MB/sec, right? I mean, even if they got the 100 MB/sec or so Sandra says they should be able to, that would only be about 80% of the theoretical maximum line capacity of gigabit.
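
For reference, the line-rate math works out roughly like this (a quick sketch in Python; the 1460/1538 payload ratio assumes standard 1500-byte Ethernet frames carrying TCP/IP, which is an assumption, not something Sandra documents):

# Raw gigabit line rate, before any protocol overhead
raw_mb_per_s = 1e9 / 8 / 1e6              # 125 MB/s in each direction

# Rough TCP payload rate: 1460 payload bytes per 1538 bytes on the wire
# (1500 MTU plus Ethernet header, FCS, preamble, and inter-frame gap)
payload_mb_per_s = raw_mb_per_s * 1460.0 / 1538.0   # about 118-119 MB/s

print(raw_mb_per_s, payload_mb_per_s)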
March 29, 2004 4:01:18 PM

You have to remember that unless you have a RAID array or Raptors, your hard disks aren't going to be transferring data at more than about 50MB/s (that's for a high-end 7200RPM, 8MB-cache model, too), so basically you're pretty much limited by your HDD. Also, the NIC is sharing the bandwidth of the PCI bus, which is 133MB/s (that's the theoretical max; realistically about 100MB/s after overheads), so if you take off 40MB/s for the HDD and about 10MB/s for the rest of the PCI devices, you only have about 50MB/s left over. Basically, unless you have CSA (is that what it's called?) and a decent HDD subsystem, you're not going to get much better performance.
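
As a rough sketch of that budget (the numbers are the ballpark figures from the paragraph above, not measurements):

usable_pci = 100.0   # MB/s realistically usable on a shared 133MB/s PCI bus
hdd = 40.0           # MB/s pulled by the disk controller during the transfer
other = 10.0         # MB/s for everything else hanging off the bus
print(usable_pci - hdd - other)   # roughly 50 MB/s left for the NIC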

EDIT: I just noticed your post in the other thread. Looks like you do have a good HDD system. But unless you have CSA you are still limited by the PCI bus.

<A HREF="http://service.futuremark.com/compare?2k1=7454540" target="_new">Yay, I Finally broke the 12k barrier!!</A><P ID="edit"><FONT SIZE=-1><EM>Edited by tombance on 03/29/04 05:02 PM.</EM></FONT></P>
Related resources
March 29, 2004 7:53:32 PM

I didn't notice any HD activity during the test, though. It completed in about a quarter of a second, without any read/write activity.

What I'm trying to figure out is whether having a gigabit NIC connected to a server with an enormously fast RAID drive will be as fast as having a RAID-0 drive in the computer itself.

This will be a pretty standard computer with a single HDD, connecting through gigabit LAN to this setup (http://bkgrafix.net/filepile/book1.xls), with 133MHz PCI-X RAID-0 and four Raptors.

I know it will not get full speed from that array; I expect the gigabit LAN to max out at about 100MB/sec, and I know for sure that the 32-bit PCI bus can handle a RAID card that runs at that speed. I had one set up before; I just had to take it down because I got a RAID-5 volume and the RAID-5 card was incompatible with the RAID-0.

The parts should be getting here soon, so the analyze-and-predict phase is over; I will just have to test it when everything arrives.

What would help is if anyone knows how to create a RAM disk in Windows XP. Mapping a network drive and copying a large file to the RAM disk would be an excellent way to benchmark the actual performance of the system, since all operations would be between the network and RAM, as if the network drive were the computer's local drive.
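
Something along these lines would time such a copy, assuming the RAM disk and the network share are already mounted (the drive letters and the file name below are placeholders, not anything from this thread):

import os, shutil, time

src = r"Z:\testfile.bin"   # large file on the mapped network drive (placeholder)
dst = r"R:\testfile.bin"   # destination on the RAM disk (placeholder)

start = time.time()
shutil.copyfile(src, dst)      # network -> RAM, no local hard disk involved
elapsed = time.time() - start

size_mb = os.path.getsize(src) / (1024.0 * 1024.0)
print("%.1f MB in %.2f s = %.1f MB/s" % (size_mb, elapsed, size_mb / elapsed))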
September 18, 2004 12:40:00 AM

Tried to delete...wouldn't let me.
September 18, 2004 1:23:32 AM

Here is a link to what looks like a quick and easy way to make a RAM disk: http://users.compaqnet.be/cn181612/RAMDisk/RAMDisk.htm

I think using a RAM disk will be the only way to get full gigabit connection speeds. All hard drive controllers are going to eat into the PCI bus, and that usually cuts the speed to half or less. I myself have never seen a gigabit transfer go over about 45MB/s.

<A HREF="http://www.folken.net/myrig.htm" target="_new">My precious...</A>
September 18, 2004 4:51:43 AM

I figured out what the problem was ages ago... I've given up on the network forum; hardly anybody comes here at all, maybe one post every few weeks.

The problem relates to some kind of overhead in the TCP protocol and the PCI bus. Basically, with normal 32-bit 33MHz PCI, no network card can communicate at higher than the upper 30s to low 40s of MB/sec. I get transfer rates much closer to 100MB/sec between two computers with PCI-X gigabit cards. PCI really should have died and vanished a couple of years ago, if not sooner. I welcome PCI Express, because it might bring some better consumer-level serving capabilities to the standard PC. The main advantage of those super-expensive server boards is their PCI-X slots, which allow gigabytes per second of throughput instead of the measly 128MB/sec cap of PCI. Finally we get a bite of that performance in motherboards that cost less than $400.
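
For reference, the raw numbers behind that cap (peak burst figures only; arbitration, command overhead, and the other devices sharing the bus all take their cut):

bus_width_bytes = 32 / 8.0    # 32-bit PCI bus = 4 bytes per clock
clock_hz = 33.33e6            # 33MHz PCI clock
peak = bus_width_bytes * clock_hz

print(peak / 1e6)     # about 133 MB/s (decimal)
print(peak / 2**20)   # about 127 binary MB/s, the oft-quoted ~128MB/sec cap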
May 2, 2008 5:03:15 PM

I will have to do my own tests, but the speed of the PCI bus should be about 125MB/s (that's bytes, not bits).

The reason I disagree with you about 30-40MB/s when using PCI gigabit NICs is that I have run across several forum threads today where people get 50-70MB/s using Intel Pro/1000 PCI NICs and SATA 7200RPM HDDs with no RAID.

So it's likely that the 50-70MB/s is the bottleneck of the non-RAID SATA setup and NOT the PCI NICs. I ran across this thread because I am now using Intel Pro/1000 NICs in XP, and I have fast 15K SCSI drives (not in RAID) as well as 7200RPM SATA 3.0 drives, yet I am only getting about 20MB/s.

Sure, it's better than the 11MB/s I was getting before the gigabit NICs, but I would like the HDD speeds to be my bottleneck and not the gigabit network. Qcheck uses memory and not the drives, so I know it's not my drives that are the bottleneck. I also did an HDD speed test, and the SATA drives get a steady 58-66MB/s depending on how much fragmentation I have at the time. I could probably get this number higher, but this is fine.
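
For anyone without Qcheck, a memory-to-memory test in the same spirit can be thrown together with plain TCP sockets (a sketch; the port, buffer size, and transfer size are arbitrary, and it measures only one direction):

import socket, time

PORT = 5001                 # arbitrary port
CHUNK = 64 * 1024           # 64KB send/receive buffer
TOTAL = 512 * 1024 * 1024   # push 512MB straight from RAM

def receive():
    # Run on one machine; counts bytes until the sender disconnects.
    srv = socket.socket()
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    got, start = 0, time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        got += len(data)
    print("%.1f MB/s" % (got / (time.time() - start) / 1e6))

def send(host):
    # Run on the other machine, pointed at the receiver's IP address.
    sock = socket.socket()
    sock.connect((host, PORT))
    buf = b"\0" * CHUNK
    sent = 0
    while sent < TOTAL:
        sock.sendall(buf)
        sent += CHUNK
    sock.close()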

But 18-20MB/s from memory to memory (XP <-> Vista) over my gigabit network is sad, even if it is about 2x the speed I had before. I want my 58-66MB/s HDDs to be the bottleneck.

And again, I know it's not the PCI bus.