
gigabit efficiency question

Last response: in Components
December 8, 2003 2:19:37 PM

How efficient should a decent gigabit net be??

I'm afraid I'm missing something, as I only get about 64 mb/sec, or around 6-8% efficiency, unless I missed a power of ten somewhere
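A minimal sanity check on that math, assuming gigabit's raw line rate of 1000 megabits/s and taking the 64 figure quoted above; whether the benchmark means megabits or megabytes changes the picture by a factor of eight:

```python
# Quick check of the efficiency arithmetic. Gigabit ethernet's raw
# line rate is 1000 megabits/s (= 125 megabytes/s).
GIGABIT_MBIT = 1000.0   # raw line rate, megabits/s
measured = 64.0         # the figure reported by the benchmark

eff_if_mbit = measured / GIGABIT_MBIT          # if the 64 is Mbit/s
eff_if_mbyte = measured * 8 / GIGABIT_MBIT     # if the 64 is MByte/s

print(f"as Mbit/s:  {eff_if_mbit:.1%}")    # 6.4% of line rate
print(f"as MByte/s: {eff_if_mbyte:.1%}")   # 51.2% of line rate
```

If the number is really megabytes per second, the network is doing far better than 6-8%; the bits-vs-bytes question comes up again later in the thread.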

I used the PassMark network test and got these figures for 2 machines [Cat 6 cable, CPU usage approx 35% for each] over 30 seconds.

I have 2 Intel Pro/1000 MTs, through a Trendware gigabit switch on a network with only these on it.

I have offloaded as many calculations/checks as possible, and tried jumbo frames.

It doesn't seem that about 7% efficiency is correct.

any ideas what I'm doing wrong, or a better way of checking??

It certainly seems a lot faster, but I must be screwing up my math somewhere

.
.
-=ed
December 8, 2003 9:57:21 PM

Sounds about right.
Gigabit really isn't meant for connections to computers, but mainly between servers and backbones.
You won't transfer that much because your computer won't be able to process that much information.

Let me explain it this way:
I assume you transferred a fairly large file from computer A to computer B, hard drive to hard drive right?

I'm guessing you're using a standard IDE hard drive, ATA66/100/133 with a 2MB buffer on that HDD.
You've reached the maximum read/write ability of your hard drive which is limiting your connection speed.

Understand now why gigabit to each computer isn't worthwhile?
Now, between two SCSI Ultra Wide 160 drives running at 10,000 RPM, which are almost exclusively used in servers, you'll have a different story.

Riser

Edited by riser on 12/08/03 06:58 PM.
December 9, 2003 12:25:56 AM

Yes, disk to disk, but not just any disks.

10k SCSI HD to 10k and 15k SCSI HD, 2-channel RAID 0 to 4-channel RAID 5 U160/320 SCSI drives. The drives all have 8 or 16 MB cache; the controllers are Adaptec 33 MHz/64-bit 3200/3400 with 128 MB of ECC cache.

I really don't think there is a disk subsystem bottleneck.

IDE stuff is pretty poor, even the new SATA stuff, so I don't use them, except for some static stuff that's nice to have online.

So all my 6 workstations and 2 servers have SCSI RAID systems, except for 2 Compaqs that are mostly internet machines.

I've been doing this for a long time, and have gotten pretty good at figuring out what's worthwhile, and my 8-computer development net hums along quite nicely. The GB network has been a big help in moving things around and doing timely backups. [I won't buy any more tapes, as they are always behind the curve, unless you want to spend 5k on a setup that may be a pain to use in a year, or get an autoloader, which are touchy and expensive to fix.]

Assumption is the mother of all screwups... ;-)

So, is there any place that can give me some facts on how to optimize a gigabit network that has a data system that can deliver the bits to the NIC??

thanks for the words, and any help or direction you can give.



.
.
-=ed
December 9, 2003 3:30:57 AM

There isn't much to optimizing your gigabit connection that would make a huge difference.
You haven't listed your OS or system specs besides your controllers and drives. It really sounds like you're just throwing things out there and have no clue what you're really talking about.
Your systems are the bottleneck in the efficiency of your network. You may have the controllers to handle the throughput, but your system bus won't be able to handle that much information, which causes the bottleneck.

It's odd for someone to call IDE equipment "poor" because it's mainstream and heavily used, and the only time you see SCSI is in a powerful workstation (e.g. rendering machines) or servers. You very rarely, if at all, see SCSI in standard workstations. On top of that, SATA wasn't designed to give an immediate improvement over IDE hard drives. It's a serial cable, after all.

What are your system specs?
Mainboards, bus speeds, memory configuration, etc.

RAID 5 configurations are considerably slower than RAID 0, which can also limit your network.

As for tapes, SDLT tape will allow you to write 200/400 GB native/compressed in around an hour or so, depending on your network configuration.

It honestly sounds like you have no clue what you're talking about and seem to be here for bragging rights alone.

If your systems can handle what gigabit offers, you're ahead of 95% of businesses and the like even with a handful of systems.

Maybe your gigabit switch is the problem?
Is it a managed switch?
Have you set your ports?
Auto Negotiation doesn't always work as stated.
Have you set your NICs to full duplex, 1000Mb?
What distance are you running your cables?
Have you certified your cables or tested them with a cable tester to verify they can even do gigabit? I understand you're running Cat 6, but that doesn't mean you're guaranteed gigabit.

What I'm saying is that if you'd like help in this forum, you'll need to list more information about your systems and your setup so we can get an understanding of what is going on and potential problems.

Riser
December 9, 2003 5:31:09 PM

I am sorry if there was an unfortunate tone to my post, but *I* thought it was dryly humorous. I was chuckling when I read your first reply. I wasn't trying to be arrogant or such with my reply.

I have upgraded the network, and the jump to gigabit was only a little more for the size [***4 various workstations: 2 Tyan S2460, 1 IWILL KK266-R, 1 IWILL KK266plus-R w/ specialty printer and scanner; server: IWILL KK266plus-R; Shuttle bkp server w/ fax and static data storage; 2 Compaq 933 MHz PIIIs***; 2 networked laser printers], running W2K SP3 on both server and workstations. Nice and simple.

It now certainly seems significantly faster, and I am trying to find what transfer rates a GB LAN can provide in theory, and what you can actually expect to see. I ran some tests that gave something like 6.5-8 mb/sec. I think that I should see something better, but I have no reference to draw any conclusions. I do not get any errors or dropped packets reported during transfer, so maybe something is set wrong, or two settings are fighting each other. My disk subsystems can deliver substantially better rates than this. I want to see where this system stands in efficiency of transfer, and maybe tweak it as best I can...

I tested through an inexpensive Trendware 8-port switch [with an 8-port Linksys 10/100 switch for printers, laptops, etc.], with 2 Tyan S2460s w/ 2 MP1800s, 1 GB ECC DDR memory, 4-channel Adaptec 3400S w/ 128 MB ECC cache, 2 15k 36 GB in RAID 0 for system stuff and 5 10k 36 GB in RAID 5 data arrays, and Intel Pro/1000 MTs. They were the only things running on the switch during the test.

I have not tested the cables, as I cannot easily get the equipment to do so any more. However, the Intel software says it is doing full-duplex gigabit at rated speed, and seems to get the right cable length. Four are connected with 25 ft cables, including the two Tyans used for testing; one is 75 ft, and one is 100 ft, all Cat 6. The switch is unmanaged, as the idea was to do this as inexpensively as possible, and the price differential was very substantial. Maybe I need a better switch. If so, I still have the decent NICs. How would you suggest I test for auto negotiation?? It is certainly faster than 10/100. Transfer a big file?

Sorry if I sound like I have no clue... I don't usually get accused of being an idiot. I have substantial publications, professional credentials, certifications and affiliations, and listed many of them at first, but I removed them when I realized I was getting caught up in this.

However, I will say I have [ghost]written a few chapters here and there in programming books, edited several specialty computer books for publishers like McGraw Hill, and was involved with the design and programming of two commercial packages, in Fortran and PASCAL, no less. I also have numerous published popular articles and a few techies, and had my own forum on tech matters on Ilink . Much of this was some time ago, but at the present, I help non-profits, teach disabled vets, write, and maintain this system.

Perhaps, in a rush, I came across differently than I had planned. I may not have a good grasp of the customs and protocols here. Again, sorry.

In the end, I'm just looking for ballpark figures of what to expect, and maybe some advice on what to try to improve things if possible, within the constraints of what is available to work with. I was hoping to find someone with some experience in this situation who could help me.


** additionally, to comment on your observations:


I am not sure what you mean by "setting my ports". Could you clarify? TCP/IP??

I don't like any IDE stuff because, in my experience, it constantly stalls the system on access, under heavy load and with large files, even with DMA. It sometimes fights with SCSI disks because of automatic drive-letter assignments, and seems to fail more often under continuous duty, and usually has a poor warranty.

Hardware SCSI RAID 5 arrays, hardware XOR, and a large controller cache can READ very quickly, and write only a little slower. And you've got to love the combination of speed, fault tolerance, and storage efficiency it gives.

At the present time... In my opinion... With things the way they seem to me...





.
.
-=ed
December 9, 2003 8:27:20 PM

If you're looking to get a transfer rate of 200 MB/s, you won't get it.
Your switch can be a problem because it may only be accepting your gigabit connection at 100 Mb, even if it's stating you're at 1 Gb.
Setting your ports to full duplex, 1000 Mbit:
Device Manager - Properties on your NIC - possibly an Advanced tab will allow you to hard-set your NIC. More than likely it's set to Hardware Default or Auto Negotiate.
You might want to bypass your switch to find out if that is your limiting factor. Being unmanaged and more of an inexpensive piece of hardware, it might well be your switch. Use a crossover cable and see if your data transfer rates change.

OK, I just looked up your Tyan S2460 motherboard. You're limiting yourself to a 266 MHz FSB, which is designed to write to ATA100.
Your system setup should be awesome for doing work on the PC with that motherboard, but it's not designed for gigabit networking. Your motherboard is probably a strong factor in your reduced transfer speeds.

Your data is coming in through your PC, going through your motherboard to your Adaptec RAID controllers, then finally passing on to your SCSI drives. Everything you have is of good quality, but your motherboard seems to be lacking for what you're asking it to do.

I would look up a high-end motherboard that boasts gigabit networking capabilities and compare.

I just read up on your motherboards and it looks like your current systems are relatively new but weren't designed with gigabit in mind or the near future.

Here is a quote from a Tom's Hardware article:

"Power users should keep Gigabit Ethernet in mind. While most controllers are able to transfer more than 100 MB/s, the PCI bus will quickly slow them down, as soon as it has to share the total bandwidth, 132 MB/s at 33 MHz, with other devices (which actually happens frequently). "

http://www6.tomshardware.com/motherboard/20030929/ddr40...
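The 132 MB/s number in that quote is just bus width times clock; here is the arithmetic as a sketch, with the 64-bit figure matching the 64-bit/33 MHz slots the Adaptec cards in this thread sit in:

```python
# PCI bandwidth = bus width in bytes * clock. Standard PCI is 32 bits
# at 33 MHz; server boards of this era also offered 64-bit/33 MHz slots.
def pci_bandwidth_mb(width_bits, clock_mhz):
    return width_bits / 8 * clock_mhz

print(pci_bandwidth_mb(32, 33))   # 132.0 MB/s, shared by every device on the bus
print(pci_bandwidth_mb(64, 33))   # 264.0 MB/s for a 64-bit/33 MHz slot
```

A gigabit NIC moving ~125 MB/s would nearly fill a shared 32-bit bus on its own, which is the quote's point.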

Not being familiar with your Adaptec cards and their complete function: do they bypass your southbridge, or do they still need to get processed there? If they do, I'm sure that will result in another bottleneck.

--
I was up late last night and I had put in an 80+ hour week last week. I'm still feeling the effects of it but I picked up on the wrong tone from your message. I apologize if my reply came back to you in any way other than helpful.

Riser
December 9, 2003 9:19:44 PM

what did you mean by "setting the ports"?

What current dual processor boards do you recommend ?

I used to prefer AMD, but the heat output is *phenomenal*, and fan noise becomes a problem. I have the servers downstairs with two other machines running on a 4-device KVM Cat 5 extender to keep the noise and heat away from my office. I feel that Intel CPUs are generally quieter and seem to be a little cooler, but what's current may be completely different.

I am planning to replace my current Tyan with a contemporary board, and roll each "downhill", so do you have a feel for what's going to be the hot MB, with enough headroom to handle gigabit, say, this February??

Any great technical innovations due, of the kind that will extend the life and enhance the networking ability of my present system?

thanks for the reply



Can't keep up with this, so I do it every year or two.

.
.
-=ed
December 13, 2003 2:46:45 AM

Configure your NIC's media type. Set it to 1000 Mb Full Duplex.

I would recommend a motherboard that has a faster FSB and maybe built-in HighPoint controllers to increase speed. I didn't spec any out; I haven't had the time, as I've been working on my own new problem.

I have an AMD 1800+ and the temperature runs at around 90 degrees, up to 120 under full load. My Pentium 4 2.8 GHz runs a lot higher under idle conditions with extreme air cooling.
As far as fan noise, most coolers work for both AMD and Intel. I personally use a 50-dollar (at the time of purchase) Antec fan (can't think of the name, wind tunnel or something like that) with a copper core. It's extremely quiet, and I found it on discount due to overstock for 20 dollars. I don't like using cheap fans, to protect my CPU and motherboard from damage. I just had someone use a cooler designed for a 900 MHz CPU on an AMD 2000+, which needless to say shorted out his motherboard; I haven't had a chance to check if the CPU protected itself yet, though. I'm hoping it did, but I'm highly doubtful.

You'll want a chipset like the nForce2 with its 400 MHz FSB, where it can use the data throughput. You're getting the data to your NIC, but it can't put it anywhere until the bus opens up to allow it to pass the data on. Since you've invested in gigabit technology, you'll need to invest in a new motherboard to match.

On top of that, I found out today that my Dells with integrated 10/100/1000 Intel NICs won't boot to network with PXE because they have problems with Cisco switches, so I had to use a Dell PowerConnect to boot. Brand new technology doesn't work with the most popular switches.
It took the Dell Gold Support rep almost an hour to figure it out, using all of their resources and calling their actual engineers to find the answer for me.

Riser
December 13, 2003 6:58:22 PM

OK...

So, with reliability and stability more important than overclocking, etc., and an onboard GB NIC not necessary, since I have a 10-pack of Pro/1000 MTs, what's the 7/9 sweet-point MB/CPU combo that you would recommend??

I prefer quality over flash. I like ECC if available, but this is not a deal breaker.

..


.
.
-=ed
December 13, 2003 10:17:04 PM

I guess I still don't understand why you're hooked on ECC memory? It's only used in servers, yet you're looking to use it on your workstations? ECC is slower than non-ECC.
I would pick any motherboard with a faster FSB. I wouldn't do a Xeon unless you're running it in your server, likewise with the Opteron. Athlon 64 is out of the question too.

Before I could even guess what kind of motherboard you would want, I'd have to know what you're doing with your network.

I work at an A&E firm where Gigabit is first expected to take off, but with a good mix of computers and servers, we don't even need to touch it.

It sounds like you're just sticking with the top-of-the-line stuff and you want all of your computers to be servers. That's why you have servers and workstations. Each one has its own performance factors.

Your limiting factor is the FSB. An 866 MHz/133 FSB is faster than a 933 MHz/100 FSB. Why? Because the FSB can push more.
P4 dual processors? Completely overkill unless you're doing 3-D modeling in AutoCAD ADT. It's close to overkill even for VB Studio and .NET work.
I think you should go back and take a look at what you need your computers to do and build them to spec that way, not build them like each one is a server. Just because you have everything designed like a server but use it as a workstation doesn't mean it's going to run well. It will probably be outperformed by any comparable workstation.
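The FSB comparison works out like this, assuming the 64-bit (8-byte) data path of that era's front-side buses:

```python
# Front-side-bus bandwidth = clock * data-path width. A lower CPU clock
# on a faster bus can still move more data per second.
def fsb_bandwidth_mb(clock_mhz, width_bytes=8):
    return clock_mhz * width_bytes

print(fsb_bandwidth_mb(100))  # 800 MB/s  (the 933 MHz CPU on a 100 MHz FSB)
print(fsb_bandwidth_mb(133))  # 1064 MB/s (the 866 MHz CPU on a 133 MHz FSB)
```

So the "slower" CPU on the faster bus has roughly a third more bus bandwidth to feed disk and network I/O.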

Riser
December 14, 2003 1:07:30 AM

A first question: How do you get the message text in the reply?? Other forums have a "quote" button that does this.

I like the security of ECC, and I only have to keep one type of memory around. My computer background cultivated overcompensation. ECC is only slower on error conditions; during non-error operations, there is no difference for the same specs. A locked system can cost irreplaceable data.

All my MBs are 133 FSB except for my bkp server, which is a legacy 1600XP w/ 512 MB, fax, IDE RAID 0, on a Shuttle GAX, which is all that system needs.

I like duals because they are smooth, and don't slow down nearly as noticeably when doing several heavy-duty tasks at one time, especially backups, data crunching and compiles.
SCSI also contributes to smooth running under heavy load, especially with heavy I/O. I hate systems that "stick" or "stall".

I just would rather spend an extra $500 to give myself and those who work for me the best possible "experience", as work time is valuable, at least to me, and anything that intrudes on a train of thought could have significant consequences for my organization. I find that top-of-the-line equipment often gives the highest level of reliability and performance. My main server runs 6-9 months without needing to be shut down except for upgrades. I like this, and want the machines that are used every day to be as reliable. Unfortunately, I'm locked in with Wintel, so I have to be fanatical.

I think Andy Warhol said "... just because you're paranoid doesn't mean they aren't after you..." ;-)

The network allows data to be transferred in a timely manner, in the most transparent way possible. I use "synchronized" folders to automatically update folders on each machine to 2 file servers, and to other relevant machines. Content is what we do... we analyze and often originate very special forms of data with proprietary software. I like to back up every time something changes, and even though we are small enough to do this, it needs bandwidth. Things are getting out of hand again, and things have to be upgraded. The network is first, and then the computing hardware to match the network and computing needs.

I'll stay with u160 r0 and r5 for workstation system and storage, and mirrored ide for a local backup to be synchronized to the servers whenever things are slack. This is the safest system I can afford that gives me the flexibility I need.

BTW, Visual Studio responds well to dual processors.

I don't feel that they are all built like servers. They are all built to be as RELIABLE as servers, with differing specifications as to storage and memory size. I have been in this dogpatch for a long time, and I have lost too much original work as it is, and have decided to prevent that as best as possible within spending limits. You are not the first to voice an opinion as you have, and it is probably good to hear it again, to keep me "grounded".

I am not in a race, I just want a fast reliable machine, and now I want to learn how to maximize the gigabit interface with them, to further the reliability of the system to safeguard our workproduct.

I am leaning toward upgrading all the motherboards except the main server, as you have suggested. I will be looking at Tyan or some other quality dual-CPU MBs for the 4 workstations, and roll the replaced MBs down to the backup server, and maybe an experimental unit for trying out software on my LAN without worry, and maybe to my son and wife.

I have looked into it, and will probably wait until March, when several speed enhancements will probably be at the 1.1 level, and start the upgrade. I'll be able to put aside funds to do it all at one time by then without discomfort. I hope to learn more in that time and make an optimal decision.

I thank you for your information, and hope to continue this discussion, as you seem pretty straightforward about what you think, and why. A very valuable type of information.

hope you, or others if they wish, reply



.
.
-=ed
December 17, 2003 12:50:58 AM

I don't have time for a long post but I'll sum a few things up.

Dual Processors are only good if the software you're running supports them. It doesn't make everything run better. I'm not aware if you know that or not, but unless you're certain the programs you're running support dual CPUs, it won't do you any good.

As far as your Adaptec controllers, if I recall, those only support software mirrors and RAID 5s, not hardware-based.
Meaning you had to install your OS and then configure your mirror/RAID through Windows, instead of creating your mirror/RAID and then installing Windows?

If that's the case you won't get what you're expecting from it.
If the main hard drive holding your OS were to fail, your software mirror/RAID would probably be lost because your OS was controlling it. Sure, you can try to rebuild/recover the RAID after reinstalling Windows, but have you tried this? It really doesn't work all that well, if at all. If software RAID is what you have, you might want to test this out before something serious happens.

As far as IDE vs. SCSI: IDE and SCSI will both fail at close to the same rates. SCSI is only better when you're running a hardware-based mirror/RAID on a server. Other than that, SCSI really doesn't give you much else except a slight increase in speed, but with your current motherboards you won't notice. You're driving a car that can go 0-100 mph in 6 seconds flat, but you'll never take the car faster than 20 mph. Get the idea?

More is not better in this field.

Now, since you're running servers, standard practice is to back up your servers and not your workstations. Considering you have a small number of workstations, you might very well back up each computer. But you're running servers, so it's safe to say you're reading files off the server and not copying them down to the local machine to work on them. I mean, if you are, you're defeating the purpose of your backup and risking the possibility of information being overwritten. Which brings me to still not understanding why you have so much money invested in your workstations.

I've covered why your Adaptec controllers (which are really just PCI cards that allow you to use SCSI instead of IDE, nothing really fancy) probably won't do what you may think they will in the case of a drive failing.

Just to mention this: at work we're running a PERC 4 RAID controller on one of our servers and manage to pull roughly 700+ Mb from server to server.

My advice:

Drop the SCSI drives on your workstations. Find a motherboard that has built-in RAID support, get 2 IDE drives, and run RAID 0 (striping) for the best performance, though no data redundancy there. Get something with a higher FSB too. Your workstations will be fine.

As far as your servers, depending on how large the files are that you're dealing with, you might be fine if you can deal with the performance of your gigabit LAN not being up to par.
If you want to utilize the gigabit LAN, you'll need to investigate proper hardware to improve that connection.

Contact a sales rep at Graybar, Dell, or a local consulting firm to find out what they recommend.
Better yet, you may want to bring in an outside consultant to tell you what you need to get your network running extremely efficiently. You'll pay for it, but from what I can tell you're so far out in left field with your range of hardware, you'll save money by getting the on-site advice of someone with strong experience in networks.


Riser
December 17, 2003 3:59:16 PM

I read your recent reply with interest. I will reply to your observations in order, but because I can't get a convenient "quote" function, this may seem a little disjointed, but it isn't.

I know that this doesn't apply to much of the readership, but I think the basic line of thought - the ideas - may be of some interest to those who would at least like to think about this type of system. I don't like the money I've had to spend, but it has paid me back, with the uptime I need without problems. All this is, of course, my opinion, but I feel it is an informed opinion from 25-plus years of experience, some associated with amazing screwups that embarrass me to this day.

I've used dual processors since 1994, starting with the AMI Titian3 board, and have always enjoyed the improved response. The separation of most I/O functions in W2K makes it even more effective, and as Mark Minasi noted in one of his seminars, "W2K is really designed to use SCSI... IDE just drags it down." Some of my software also uses it, so I'll stay with dual processors, though I may see how a single 3+ GHz one will work with some of it.

With dual processors, W2K seems to divide up the basic tasks, and you are allowed to assign certain tasks to certain processors. I have found, by default, at least in the machines I have used, I/O is on one processor, and the apps run on the other [along with certain services], as shown on the performance tab of the task manager. This makes a lot more efficient use of the 2 CPUs, as the response on the GUI is much smoother, especially with hardware [or, to a lesser but certainly noticeable extent, software] RAID. It greatly improves software RAID, which has its place in certain situations. Interrupts can be handled while the rest of the machine hums smoothly along. This might be attributable to the programs I tend to run, but I've seen at least some of it on ALL the dual-CPU machines I've used. Oddly enough, most of these machines had some form of SCSI on them, with that wonderful disconnect function.

The difference is readily noticed when heavy network or file i/o is going on while some app is working, and disk i/o is occurring. A situation common on my system.

The O/S overhead does not interfere with the running apps. SCSI is important because it doesn't hold the bus while waiting for the drive to deliver the data. Along with user-defined read-ahead, a 128 MB cache and other features in the onboard controller [technically a host adapter, but I use controller because it's easier], reads and writes are handled very efficiently. The fastest IDE drives still have the habit of holding the bus, and stop everything else. IDE RAID may be OK for some things, but this bus-seizure situation, their basically inefficient handling of computer resources, along with the fact that they are not designed for 24/7 operation, leads me to think that I will keep my hardware SCSI systems on my workstations, as I do ECC on my MBs.

I don't understand what you were trying to say about the Adaptec controllers. I configure my RAID setup through a configuration utility on the board, which comes up with a <ctrl>A at bootup - NOTHING to do with the O/S. There is a Windows app, but I just use that to see what's going on with the system when I'm up.

What do you THINK I'm expecting from my system?? I get lightning-fast loads from the mirrored 8 MB-cache 15k U160 drives, along with the reliability of duplexed mirrored drives [1 drive on each channel]. WHAT really doesn't work "...all that well"? It works very well, as far as I have experienced. My son's newish game machine is outrun by my 2-year-old clunker.

As for SCSI speed, you are missing one point - SCSI is a SYSTEM, drives and controller. The controller, as mentioned above, provides much of the performance, besides the speed of the drives. I notice a "speed increase" because of the on-board cache, transferring at whatever speed the bus can handle, and it can usually handle two drives at or a little below rated speed. Read-ahead often provides the next read from the cache, further reducing the time necessary to access the requested data, all without locking up the bus for the duration. 128 MB of ECC holds a lot of data. Get the idea?? ;-)

More may not be enough, but a little better than enough gives you some headroom, and allows you to better maximize whatever "juice" your system has. However, in cost-sensitive systems, it isn't practical. A SCSI system disk might be effective, however. I find this not to be true on systems with IDE, so it might not work. When my IDE DVD burner was running, the system slowed down noticeably. I now have all this on a separate little machine.

As to servers/workstations and backup, it depends what you want your system to do. You are assuming a lot as to what my system does and what will maximize the features that I find most important. You do those first, and do your best with the rest after they have been attended to. I have invested in the workstations because that's where the work is done, and my work is both processor- and disk-intensive, along with the fact that it's often very costly to replace, if it can be replaced at all.

You are trying to impose your idea of how the out-of-the-box server/client model should be implemented over my workspace. A rigid adherence to any one model prevents you from maximizing the reliability and efficiency of the system.

IMHO.

Why won't my controllers do what I think when a drive fails?? They always have. What do YOU think they should do, and what do you think I expect? Again, why you keep saying that IDE is better than a SCSI subsystem escapes me. I just don't understand how you think any IDE system will be faster than a 15k U160 RAID 5 on reads, which is most of what my system's workstations do. Writes aren't all that slow, either, with another built-in function of SCSI controllers, command queuing.

I definitely need a faster MB, and I will address that early this spring. Things in the network world are no longer simple, so I have to go somewhere and learn a lot about new technology. I will work on this.

I WAS the outside consultant you talk about, and I was employed by several F500 companies and the government. That was some time ago, but though the hardware changes, the underlying concepts remain pretty much the same. Being semi-retired, I will have the time.

My hardware isn't out in "left field" - it will deliver more efficient and reliable operation [except for the LAN upgrade, which is why I came here to ask for some help] than the standard system you are recommending. You are making a common mistake - assuming one size fits all.

I have found this not to be true. A judicious selection of equipment can fit much better if you can think in a creative manner and select from the features available to achieve your goal. I hate the phrase, but "...outside the box" sums it up.

And there's another aphorism that I feel applies - "you get what you pay for" - usually, if you are informed when you shop. Most readers here know this, and I'm sure they usually try to maximize their systems with what they can afford.

Hope this makes my ideas clear.

IMHO, FWIW, YMMV [for sure]

Got to go now, data to crunch, MB's to upgrade, networks to learn about.

Thanks, sorry about the disagreement, and the poor spelling. I hope we can agree to disagree [another icky bunch of words].

I'm going to fade away now, but I will lurk.


..

-=ed
December 19, 2003 6:50:28 PM

Does Passmark report megabits/sec or megabytes/sec?

Not exactly scientific proof, but I was watching TechTV's The Screen Savers recently. They were comparing 100 Mbit vs 1000 Mbit transfers. The 1000 Mbit was really fast at transferring a 500 MB file. They didn't put a clock to it, but I timed it at 13 seconds, which makes for about 38 MB/sec.

They were using 4 identical Falcon Northwest "LAN Party" boxes (except for the NICs) and just crossover cables.
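Working that timing out (rough numbers, since the 13 seconds was hand-timed):

```python
# 500 MB hand-timed at about 13 seconds on the gigabit link.
size_mb = 500
seconds = 13

rate_mb_s = size_mb / seconds      # ~38.5 MByte/s
rate_mbit_s = rate_mb_s * 8        # ~308 Mbit/s
print(f"{rate_mb_s:.1f} MB/s, {rate_mbit_s:.0f} Mbit/s, "
      f"{rate_mbit_s / 1000:.0%} of gigabit line rate")
```

So even that demo only hit about a third of the theoretical line rate, which is a useful ballpark for real-world transfers.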

56K, slow and steady does not win the race on internet!
December 19, 2003 10:52:43 PM

<S I I G H H H>

I guess I must have something wrong

I have recently measured between 14 and 17 seconds to transfer a 260 MB file.

I'll have to work on it.

Thanks for the words...

.
.
-=ed
December 19, 2003 11:04:02 PM

Don't forget to factor in your hard drives' speeds.

A really fast IDE drive might be able to maintain 38 MB/sec for reading but not for writing. Single drives will surely average much lower.

Drive write speed will be your bottleneck regardless of gigabit ethernet's theoretical speed or real speed.

I'm sure the LAN Party boxes have nice RAID setups. The systems probably also have enough RAM to buffer a 500 MB transfer so disk speed wouldn't be a factor anyway.

<b>56K, slow and steady does not win the race on internet!</b>
December 19, 2003 11:12:33 PM

By the way, 260 MB in 14 seconds translates to 18.5 MB/sec, or roughly 149 Mbits/sec, about 15% of gigabit ethernet's theoretical max. That's much higher than the 6% or 7% you mentioned earlier.

I have a feeling that Passmark is reporting network transfers in MBytes/sec not Mbits/sec, so 68 MBytes/sec is not that bad at all.
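A quick script makes the bits-vs-bytes arithmetic in this thread explicit. The 260 MB / 14 s figures are the ones quoted above, and the 1000 Mbit/s baseline is gigabit's raw line rate:

```python
# Sanity-check a measured file transfer against gigabit ethernet's
# raw 1000 Mbit/s line rate. Figures match the 260 MB / 14 s quoted above.

def mbytes_per_sec(size_mb, seconds):
    """Average transfer rate in megabytes per second."""
    return size_mb / seconds

def as_mbits(mbytes):
    """Convert MB/s to Mbit/s (1 byte = 8 bits)."""
    return mbytes * 8

rate_mb = mbytes_per_sec(260, 14)   # ~18.6 MB/s
rate_mbit = as_mbits(rate_mb)       # ~149 Mbit/s
efficiency = rate_mbit / 1000       # fraction of gigabit line rate

print(f"{rate_mb:.1f} MB/s = {rate_mbit:.0f} Mbit/s "
      f"({efficiency:.0%} of gigabit)")
```

If a benchmark mislabels MBytes/sec as Mbit/sec, the reported number ends up a factor of 8 too low, which is exactly the kind of confusion this thread keeps running into.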

<b>56K, slow and steady does not win the race on internet!</b>
December 19, 2003 11:46:46 PM

The drives are 15k 18gb U160 Seagate Cheetahs, which have an external transfer rate of about 100 MB/sec, but I don't know if this is the sustained rate. The transfer rate of the bus is about 260 MB/sec, which means 2 drives would be able to saturate it. The controller is an Adaptec 3200 with a 64-bit bus and 128mb ecc cache.

I don't think these are the problem.

I think riser is right about the bus, because I don't think the drive system is too slow, but I'm not sure about the controller, because the adaptec is in a 64-bit 33mhz slot. The adaptec manual and product sheets seem to be missing this info, even approximate figures.

I would think that one drive would be able to reach 50-60 MB/sec, two around 100 MB/sec, but it isn't happening.

Have to get a faster board to see what happens, unless something else occurs to me that solves the problem.
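One way to take the network out of the picture entirely is to time a local sequential read directly. This is just a rough sketch, not a proper disk benchmark; the OS page cache can inflate the number on a freshly written file, so treat the result as an upper bound:

```python
# Rough sequential-read timing to check whether the disk subsystem could
# be the bottleneck. Writes a scratch file of incompressible data, syncs
# it, then times a sequential read back. Caching may inflate the result.
import os
import tempfile
import time

CHUNK = 1 << 20                      # 1 MiB per write/read
TOTAL_MB = 64                        # keep the scratch file small

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    for _ in range(TOTAL_MB):
        f.write(os.urandom(CHUNK))   # incompressible data
    f.flush()
    os.fsync(f.fileno())             # push the writes toward the disk

start = time.perf_counter()
read_bytes = 0
with open(path, "rb") as f:
    while chunk := f.read(CHUNK):
        read_bytes += len(chunk)
elapsed = time.perf_counter() - start
os.unlink(path)                      # clean up the scratch file

rate = read_bytes / (1 << 20) / elapsed
print(f"sequential read: {rate:.0f} MB/s over {TOTAL_MB} MB")
```

If this local number comfortably exceeds the network transfer rate, the drives are not the limiting factor.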

thanks



.
.
-=ed
December 19, 2003 11:49:43 PM

I know. I think passmark is doing something I don't know about. It SAYS Mbit/sec, but that could be a mistake.

If that's a mislabel and the figure is really MBytes/sec, it would read about 63 MB/sec.

oh, well... ;-)

thanks again



.
.
-=ed
December 20, 2003 1:15:22 AM

Sorry, I should have limited my comments to what I observed in the television program.

I didn't realize you were running 15K drives in a RAID array. (This is what happens when one makes snap judgements without reading an entire thread.)

I haven't kept up with drive capabilities, but looking at the Cheetah X15 data sheet I see the internal transfer rate is 38.0 to 48.9 MB/sec. (I believe this means the sustainable rate.) Plus you have two of them. Obviously, HD transfer rate is not your problem.

I take back what I said before (LOL). I have no idea what is limiting your network speed.


<b>56K, slow and steady does not win the race on internet!</b>
December 20, 2003 2:55:34 AM

I too make assumptions... then the fun begins.

internal transfer rate is 49-75 MB/sec on my spec sheet, which I just found.

I have no idea yet, but I worry things like a dog with a bone, so I will figure it out when I get a few days to really tear into it and experiment... I hope ;-) ;-)

Riser said the MB is the problem, and maybe he's right

I'm at a loss right now.



.
.
-=ed
December 20, 2003 3:13:26 AM

49-75? I found the pdf that says the same thing, but the online data sheet said 38.0 to whatever I typed.

Oh well, either way it doesn't explain the lack of performance you are getting.

I'm sure you'll find the problem.

<b>56K, slow and steady does not win the race on internet!</b>
January 5, 2004 2:30:15 PM

Just a quick note. If you have gigabit ethernet you have a theoretical limit of 125 megabytes/second (1 byte = 8 bits) in each direction; full duplex lets you move 125 MB/s each way at the same time, but a single one-way transfer still tops out at 125 MB/s. However, many things affect the real throughput of a network. First, all the bottlenecks in the computer, whatever they may be, then the load on the computer while running your applications plus the file transfer. A cable with a kink in it will also slow things down, as will interference from ordinary electrical equipment. After all is said and done you'll reach a realistic speed of 60-80% of the theoretical speed, with 80% being almost perfect. You are at about 50% with 65 MB/s, which means that you have some bottleneck(s) somewhere, but it is still decent performance, and finding the bottleneck might be hard.

Dev

---
My Sig:
))
(( ___________________
|||_____________|_____|
January 10, 2004 6:34:27 PM

That's what I thought but EdtheROM stated that Passmark is reporting 64 Mbit/sec transfers not 64 MByte/sec transfers for his network.


<b>56K, slow and steady does not win the race on internet!</b>
January 13, 2004 11:43:16 AM

Oops, I didn't bother to read all the long posts (nor will I), but my first guess is bad cable. If the cable is known good, next make sure that the cable is not running right next to electrical equipment such as a TV, fridge, or microwave, or anything else with an electrical field. Third, make sure that all the connections are good between NIC and cable, and cable and switch/router.

If that doesn't help more serious troubleshooting is required.

Dev

---
My Sig:
))
(( ___________________
|||_____________|_____|
January 30, 2004 1:10:56 PM

Well I really fell out of this thread because of working longer hours and all.
In reading the latest replies I have to say "read the damn posts" haha. With some of the questions being asked, you're running Ed around in circles by asking questions he's already answered.

I looked up the motherboards he has running. They're good boards, but for the hardware he has in his machines, they're obviously not up to par. Internal transfer rates are fine because he's transferring between two SCSI drives, etc.
The motherboards he is using really weren't designed to work with gigabit. My thought is that the motherboards were tweaked to work well with 100mbit connections without the thought of a gigabit card being put into them. I believe the boards are 2+ years old if I remember what I looked up. Everything he has in his computer is designed to complement and work with everything else, except his motherboard. It's the only thing that doesn't fit into the equation.
Hard drive? Top of the line.
Controller cards? very good ones.
CPU: good enough
Memory: can't go wrong with what he has
power supply, blah blah.. good.
motherboard? older, not a high quality part by today's standards.

It's going to take some research on his part to find a motherboard he likes that promises the performance he wants. It's the only thing that can limit his performance to what he is getting.

Riser
February 1, 2004 12:23:20 AM

I also dropped out after I tried to help someone and got flamed by some idiot who could barely spell, and blasted me for not reading the whole post. Actually, he didn't understand the post, so I just decided that it wasn't worth it.

however, as I said before...

"I'll stay with dual processors, though I may see how a single 3+ghz one will work with some of it."

Sooooo, instead of waiting for April or so, I bought an ASUS P4C800-E Deluxe.

A seemingly nice board, and with the recent price cuts, I was able to get a package with a 3.2GHz 512kb cache P4 CPU [a splurge, I know, but it was only a hundred more than the 2.8GHz], 1gb of cas3 ecc, and a custom cooler, assembled and burned in for around $850. I suppose I could have gotten it cheaper, but it's a good place to buy from. As usual, you get better support from a place with a little better profit margin.

What are your comments on this MB?? Should I have done something different??? In your opinion??

I am putting it in a box with 3 u160 15k 36gb Seagate Cheetahs and 4 73gb Fujitsus on the same old 3200s.
I will also put 2 120gb/7200 WD drives on the MB's raid chip, and test the ICH5R southbridge SATA RAID with some new drives.

I will try the PRO/1000 NIC card, but the MB offers an onboard Intel PRO/1000 on an internal bus called CSA, which is dedicated to the nic and offloads the PCI bus.

This should test your contention that it may be the older boards that are the bottleneck. If this doesn't pick up the xfer rate, I'll have to look at the switch.

Reading somewhere else on the board, I thought that the idea of setting the max info transfer rate at 1000/8 = 125 MB/sec is a little simplistic. Packet size alone causes the efficiency to go all over the place, as the data is wrapped in header and trailer overhead from each of the tcp/ip stack's layers. This won't help in calculating the actual DATA transferred, just the raw bits. I'm not sure whether these test programs count the bits flowing or the data delivered down and up the tcp/ip stack, or what they may or may not do with blocksize. I must admit that I'm still not too comfortable with these programs, but it looks like I'll have to learn.
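For a test where you know exactly what is being counted, a pair of plain TCP scripts measures the payload actually delivered (goodput), with no disks involved. The sketch below runs both ends on loopback in one process for illustration; split across two machines (sender pointed at the receiver's address), the same idea measures the real link:

```python
# Minimal memory-to-memory TCP throughput test. Counts payload bytes
# actually delivered (goodput), not raw bits on the wire. Both ends run
# on loopback here for demonstration only.
import socket
import threading
import time

PAYLOAD_MB = 16
CHUNK = 64 * 1024

def receiver(sock, result):
    conn, _ = sock.accept()
    total = 0
    start = time.perf_counter()
    while data := conn.recv(CHUNK):      # empty bytes means sender closed
        total += len(data)
    result["seconds"] = time.perf_counter() - start
    result["bytes"] = total
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))            # loopback, ephemeral port
server.listen(1)
port = server.getsockname()[1]

result = {}
t = threading.Thread(target=receiver, args=(server, result))
t.start()

sender = socket.create_connection(("127.0.0.1", port))
buf = b"\x00" * CHUNK
for _ in range(PAYLOAD_MB * (1 << 20) // CHUNK):
    sender.sendall(buf)
sender.close()
t.join()
server.close()

mb = result["bytes"] / (1 << 20)
print(f"{mb:.0f} MB in {result['seconds']:.2f}s "
      f"= {mb / result['seconds']:.0f} MB/s (goodput)")
```

Loopback numbers say nothing about the gigabit link itself, but running the two halves on separate machines gives an application-level figure directly comparable to a file-copy timing, without any ambiguity about bits vs bytes.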

I would like to see what blocksize is being used and what else is on the bus of the other machine, along with the boards' xfer specs.

Maybe I will figure it out yet... ;-)


thanks for the words.

Anyone else have some thoughts...???




BTW, I've had a few people who said they had something to say, but didn't want to read the whole post. How can they expect... Oh, never mind, it takes all kinds, I guess...

I hope those who are alphabetically challenged will try harder to read the whole message before offering opinions.
If so, I'll refrain from commenting on your abilities and lifestyle, which I gleaned from reading your name on YOUR message.

[{a little humor there...}]




.
.
-=ed
March 19, 2004 5:53:12 AM

Just a note, I also get 36-ish MB/sec between my two computers, using SiSoft Sandra, and it does not access the hard drives for the test. At least there is no seeking going on when I hit "refresh results".

Even if it did use hard drive transfer, both computers have 3-striped IDE Raid. I'm pretty sure it's the NICs here. I have Linksys EG1032 V2 Instant Gigabit cards. I even eliminated the gigabit switch entirely, and ran a cable straight from one PC to the other, no change. The cable is Cat6, better than the required Cat5e. Even tweaking the advanced options for the NIC itself, its max. allowed IRQ per second, read/write buffers, etc. didn't change results. I'll have to try this with a large file, and see if I get a result closer to the weird 36MB/s network cap, or closer to the RAID drive's cap of 100MB/sec.
March 21, 2004 4:32:09 AM

So you guys are blaming the motherboard for the gigabit card's transfer rate? I thought the PCI bus was able to transfer upwards of 100 MB/sec? The gigabit card would come close to the maximum bandwidth, but I wouldn't think it would be PCI bus limited.

My cards max out at 37 MB/sec, even when they are connected directly together with nothing but a single cable, two manually assigned IP addresses, no firewalls or anything installed. The PCI bus should be able to handle more than three times this bandwidth.
December 29, 2005 3:24:22 AM

I manage approximately 80 servers, almost all of them using onboard gigabit NICs. These servers are mostly Compaq Proliant DL360 and DL380 G4 series; the onboard NICs are actually Broadcom. They're all connected through Dell gigabit switches with CAT5E or CAT6 cable, and I typically see 60-90 megabytes/sec of throughput depending on the server and the type of data being transferred.
We also have many Dell P4 desktop PCs into which we have installed Intel Pro 1000 PCI NICs; they are able to achieve nearly 60 MB/sec throughput. The CPU usage on these desktops can get very high when transferring a lot of data, though, easily hitting 50%-60% utilization. This is because the PCI bus quickly becomes saturated. The Compaq servers use Intel CSA, so the data is able to bypass the PCI bus and go straight to the memory controller on the motherboard. This is not as much of a problem with newer motherboards that use PCI-E, since the PCI-E bus has a lot more bandwidth than PCI. If you have servers/workstations with PCI-X slots I would definitely go that route, as it also provides a lot more bandwidth than PCI.

On a smaller network I would definitely look at using jumbo frames, which allow TCP/IP frames of a much larger size. By cutting down on the TCP/IP overhead you will be able to transfer more data faster. The standard frame size today is 1500 bytes; jumbo frames allow you to go up to 9000 bytes. You would see a big speed increase over standard gigabit. Because of the hardware compatibility issues involved with using jumbo frames (NICs, switches, etc. all have to support them) and the lack of standards, it is difficult to implement this on a larger network.
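A back-of-the-envelope calculation shows how much of the throughput headers actually eat, assuming plain TCP over IPv4 with no options or VLAN tags:

```python
# Frame efficiency, standard vs jumbo MTU. Assumed per-frame overhead:
# 14-byte Ethernet header + 4-byte FCS, plus 8 bytes preamble and
# 12 bytes inter-frame gap on the wire; 20-byte IPv4 + 20-byte TCP
# headers inside the MTU. No options, no VLAN tags.

WIRE_OVERHEAD = 14 + 4 + 8 + 12      # Ethernet framing + preamble + IFG
IP_TCP = 20 + 20                     # headers carried inside the MTU

def goodput_fraction(mtu):
    """TCP payload bytes as a fraction of total bytes on the wire."""
    payload = mtu - IP_TCP
    wire = mtu + WIRE_OVERHEAD
    return payload / wire

std = goodput_fraction(1500)     # ~0.949
jumbo = goodput_fraction(9000)   # ~0.991

print(f"standard 1500-byte MTU: {std:.1%} payload")
print(f"jumbo 9000-byte MTU:    {jumbo:.1%} payload")
```

Even at the standard 1500-byte MTU, header and framing overhead costs only about 5%, so protocol overhead alone cannot explain single-digit efficiency; jumbo frames just recover most of that last slice.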

Even with your hardware you should see significantly more throughput than the 6-8% that you stated in your earlier post.
I would look at your switch as the most likely culprit. If you don't have access to another gigabit switch, you could try connecting two systems with a crossover cable and running some throughput tests.
January 20, 2006 3:42:11 PM

I'm just doing some tweaking on my own Gbit network, and have run into similar problems. Here's what I've discovered:

If you want to eliminate the harddrives as a source of 'lag', try running a ramdrive to ramdrive transfer.

We use our network for LANning, and I've found the average decent speed is about 35 Mbyte/sec. Sometimes it's faster, around 50 Mbyte/sec; sometimes it's a lot slower, around 10 Mbyte/sec. My current machine is an excellent example: at our last LAN, it would do 35 Mbyte/sec to 2 of 12 people. The rest got 10 Mbyte/sec.
The best theory I have so far is that it's some sort of NIC to NIC issue : misaligned settings or something.

.....testing continues :D 
February 27, 2006 12:48:08 AM

You boys are very entertaining, but could you help me? How do you change the font color in MS word? :lol:  :lol:  :lol:  :lol:  :lol: 
March 2, 2006 3:24:27 AM

You should be getting better than 7%. I am using gigabit lan at home and usually on file transfers from computer A to B, I get about 30-35% efficiency on average.
March 6, 2006 7:39:02 AM

Quote:
You should be getting better than 7%. I am using gigabit lan at home and usually on file transfers from computer A to B, I get about 30-35% efficiency on average.


Ditto. I'm using jumbo-frame-free built-in NICs via a consumer switch, and get around 30 MB/s file transfer speed (e.g. 1.25 GB @ ~ 8 MB average file size in 37s) from IDE to IDE. When it's cached and initiated from the source side, I get 50-60 MB/s.

Quote:
Sounds about right.
Gigabit really isn't meant for connections to computers, but mainly between servers and backbones.
You won't transfer that much because your computer won't be able to process that much information.

Let me explain it this way:
I assume you transferred a fairly large file from computer A to computer B, hard drive to hard drive right?

I'm guessing you're using a standard IDE hard drive, ATA66/100/133 with a 2MB buffer on that HDD.
You've reached the maximum read/write ability of your hard drive which is limiting your connection speed.

Understand now why gigabit to each computer isn't worthwhile?


No.
March 17, 2006 2:36:43 PM

How many packets are these cards doing per interrupt they generate?

Most GbE cards have a setting to raise this, and the throughput ceiling rises with it.

(This thread is dead perhaps, but there are recent replies to it).

Using RAM Drives to benchmark is good in theory, but you'll see HDD --- NIC --- NETWORK --- NIC --- HDD copies outperform them for various reasons I won't go into.
April 4, 2006 4:54:45 AM

That definitely seems off. My LAN seems to run about at the norm, which is around 30-35% network utilization on a gigabit LAN. That would be about 300-350 Mbit/sec.
April 6, 2006 11:20:09 PM

woops, old thread
April 7, 2006 7:25:19 PM

Sorry to post yet again to this old thread, but I think that the OP is gone, and has been gone for years, so this is pointless -- no progress is possible on the OP's problem because he would have to contribute to its diagnosis. Hopefully he's figured it out for himself; he's had long enough.
July 6, 2006 3:30:11 AM

Quote:

Dual Processors are only good if the software you're running supports them. It doesn't make everything run better. I'm not aware if you know that or not, but unless you're certain the programs you're running support dual CPUs, it won't do you any good.
...

As far as IDE to SCSI. IDE and SCSI will both fail at close to same rates. SCSI is only better when you're running a hardware based mirror/RAID on a server. Other than that SCSI really doesn't give you much else except for a slight increase in speed, but with your current motherboards you won't notice. You're driving a car that can go 0-100mph in 6 second flat, but you'll never take the car faster than 20mph. Get the idea?
...
I've covered why your adaptec controllers (which are really just PCI cards that allow you to use SCSI over IDE, nothing really fancy) probably won't do what you may think they will do in the case of a drive failing.
...
My advice:
Drop the SCSI drives on your workstations. Find a motherboard that has built in RAID support, get 2 IDE drives run RAID1 (striping) for the best performance, but no data redundancy there. Get something with a higher FSB too. Your workstations will be fine.

As far as your servers, depending on how large the files are that you're dealing with you might be fine if you can deal with the performance of your gigabit lan not being up to par.
If you want to utilize the gigabit lan, you'll need to investigate proper hardware to increase that connection.

Contact a sales rep. at Graybar, Dell, or a local consulting firm to find out what they recommend.
Better yet, you may want to bring in an outside consultant to tell you what you need to get your network running extremely efficient. You'll pay for it, but from what I can tell you're so far out in left field with your range of hardware, you'll be saving money by getting the on-site advice of someone with strong experience in networks.
Riser


There are so many technical errors in Riser's posts my eyes are bleeding.

First
Dual processors and dual core processors are almost always faster for standard users because of the number of simultaneous applications being used. Advanced users can also use processor affinity to bind applications to a specific core or processor. Most benchmarks don't run in the typical environment home users and power users have, meaning large numbers of applications running simultaneously. On average, I keep 10 applications open at any one time.

Second
IDE and SCSI have completely different MTBFs. SCSI drives have always been more reliable, higher performance, and more expensive. My old SCSI drives (10K rpm) from the late 90s are far faster than my WD Raptors, which also operate at 10K rpm. SCSI shines at moving small files too. SCSI performance is far superior to ATA/SATA, end of story. Read some benchmarks if you want to know more.

Third
Raid 0 is striping, not raid 1. Raid 1 is mirroring aka duplexing.

Fourth
Onboard RAID support is not superior to RAID on a PCI or PCI-E card. The mobo is a single point of failure there: if it goes down, everything goes. If the card is separate and either the card or the mobo fails, it is easier to replace. A card can also be swapped or upgraded much more easily.