gigabit efficiency question

EdtheROM

Distinguished
Nov 21, 2003
30
0
18,530
How efficient should a decent gigabit net be??

I am missing something, I'm afraid, as I only get about 64 Mb/sec, or around 6-8% efficiency, unless I missed a power of ten somewhere.

I used the PassMark network test and got these figures for 2 machines [Cat 6 cable, CPU usage approx 35% for each] over 30 seconds.
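
As a sanity check on my own math, here's the back-of-the-envelope figuring written out as a quick Python sketch (the 64 is just the Passmark number; whether it means megabits or megabytes is exactly what I'm unsure about):

# Efficiency of the measured rate against gigabit Ethernet's 1000 Mbit/s line rate.
# Assumption: "64" is the Passmark figure; its unit (Mbit/s vs MByte/s) is the open question.
GIGABIT_MBITS = 1000.0
measured = 64.0

# If Passmark means megabits per second:
print(f"As Mbit/s:  {measured / GIGABIT_MBITS:.1%} of gigabit")        # ~6.4%

# If Passmark actually means megabytes per second (8 bits per byte):
print(f"As MByte/s: {measured * 8 / GIGABIT_MBITS:.1%} of gigabit")    # ~51%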

I have 2 Intel PRO/1000 MTs, through a TRENDware gigabit switch on a network with only these on it.

I have offloaded as many calculations/checks to the NICs as possible, and tried jumbo frames.

It doesn't seem that about 7% efficiency is correct.

Any ideas what I'm doing wrong, or a better way of checking??

It certainly seems a lot faster, but I must be screwing up my math somewhere

.
.
-=ed
 

riser

Illustrious
Sounds about right.
Gigabit really isn't meant for connections to computers, but mainly between servers and backbones.
You won't transfer that much because your computer won't be able to process that much information.

Let me explain it this way:
I assume you transferred a fairly large file from computer A to computer B, hard drive to hard drive, right?

I'm guessing you're using a standard IDE hard drive, ATA66/100/133 with a 2MB buffer on that HDD.
You've reached the maximum read/write ability of your hard drive, which is limiting your connection speed.

Understand now why gigabit to each computer isn't worthwhile?
Now, between two 10,000 RPM SCSI Ultra160 drives, which are almost exclusively used in servers, you'll have a different story.

Riser

Edited by riser on 12/08/03 06:58 PM.
 

EdtheROM

Distinguished
Nov 21, 2003
30
0
18,530
Yes, disk to disk, but not just any disks.

10K SCSI HD to 10K and 15K SCSI HD, 2-channel RAID 0 to 4-channel RAID 5, U160/320 SCSI drives. The drives all have 8 or 16 MB cache; the controllers are Adaptec 33 MHz/64-bit 3200 and 3400 models with 128 MB of ECC cache.

I really don't think there is a disk subsystem bottleneck.

IDE drives are pretty poor, even the new SATA ones, so I don't use them, except for some static data that's nice to have online.

So all 6 of my workstations and 2 servers have SCSI RAID systems, except for 2 Compaqs that are mostly internet machines.

I've been doing this for a long time, and have been pretty good at figuring out what's worthwhile, and my 8-computer development net hums along quite nicely. The GB network has been a big help in moving things around and doing timely backups. [I won't buy any more tapes, as they are always behind the curve, unless you want to spend 5k for a setup that may be a pain to use in a year, or get an autoloader, which are touchy and expensive to fix.]

Assumption is the mother of all screwups...;-) ;-) ;-);-) ;-) ;-);-) ;-)

So, is there any place that can give me some facts on how to optimize a gigabit network that has a data system that can deliver the bits to the NIC??

thanks for the words, and any help or direction you can give.



.
.
-=ed
 

riser

Illustrious
There isn't much to optimizing your gigabit connection that would make a huge difference.
You haven't listed your OS or system specs besides your controllers and drives. It really sounds like you're just throwing things out there and have no clue what you're really talking about.
Your systems are the bottlenecks in the efficiency of your network. You may have the controllers to handle the throughput, but your system's bus won't be able to handle that much information, which causes the bottleneck.

It's odd for someone to call IDE equipment "poor" because it's mainstream and heavily used, and the only time you see SCSI is in a powerful workstation (e.g. rendering machines) or servers. You very rarely, if at all, see SCSI in standard workstations. On top of that, SATA wasn't designed to give an immediate improvement over IDE hard drives. It's a serial cable after all.

What are your system specs?
Mainboards, bus speeds, memory configuration, etc.

RAID 5 configurations are considerably slower than RAID 0, which can also limit your network.

As for tapes, SDLT tape will allow you to write 200/400 GB native/compressed in around an hour or so depending on your network configuration.

It honestly sounds like you have no clue what you're talking about and seem to be here for bragging rights alone.

If your systems can handle what gigabit offers, you're ahead of 95% of businesses and the like even with a handful of systems.

Maybe your gigabit switch is the problem?
Is it a managed switch?
Have you set your ports?
Auto Negotiation doesn't always work as stated.
Have you set your NICs to full duplex, 1000Mb?
What distance are you running your cables?
Have you certified your cable or tested them with a cable tester to verify they can even do gigabit? I understand you're running Cat 6, but that doesn't mean you're guaranteed gigabit.

What I'm saying is that if you'd like help in this forum, you'll need to list more information about your systems and your setup so we can get an understanding of what is going on and potential problems.

Riser
 

EdtheROM

Distinguished
Nov 21, 2003
30
0
18,530
I am sorry if there was an unfortunate tone to my post, but *I* thought it was dryly humorous. I was chuckling when I read your first reply. I wasn't trying to be arrogant or such with my reply.

I have upgraded the network and the jump to gigabit was only a little more for the size [4 various workstations: 2 Tyan S2460, 1 IWILL KK266-R, 1 IWILL KK266Plus-R with specialty printer and scanner; server: IWILL KK266Plus-R; Shuttle backup server with fax and static data storage; 2 Compaq 933 MHz PIIIs; 2 networked laser printers] running W2K SP3, both server and workstation. Nice and simple.

It now certainly seems significantly faster, and I am trying to find what transfer rates a gigabit LAN can provide in theory, and what you can actually expect to see. I ran some tests that gave something like 6.5-8 mb/sec. I think that I should see something better, but I have no reference to draw any conclusions from. I do not get any errors or dropped packets reported during transfer, so maybe something is set wrong, or two settings are fighting each other. My disk subsystems can deliver substantially better rates than this. I want to see where this system stands in efficiency of transfer, and maybe tweak it as best I can...

I tested through an inexpensive TRENDware 8-port switch [with an 8-port Linksys 10/100 switch for printers, laptops, etc.], using 2 Tyan S2460s, each with 2 MP 1800+ processors, 1 GB ECC DDR memory, a 4-channel Adaptec 3400S with 128 MB ECC cache, 2x 15K 36 GB drives in RAID 0 for system stuff, 5x 10K 36 GB drives in RAID 5 data arrays, and an Intel PRO/1000 MT. They were the only things running on the switch during the test.

I have not tested the cables, as I cannot easily get the equipment to do so any more. However, the Intel software says it is doing full-duplex gigabit at rated speed, and seems to get the right cable length. Four are connected with 25 ft cables, including the two Tyans used for testing, one is 75 ft, and one is 100 ft, all Cat 6. The switch is unmanaged, as the idea was to do this as inexpensively as possible, and the price differential was very substantial. Maybe I need a better switch. If so, I still have the decent NICs. How would you suggest I test for auto-negotiation?? It is certainly faster than 10/100. Transfer a big file?
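
For example, a rough way to time such a transfer, as a small Python sketch (the file and share names below are placeholders, not my actual paths; the result includes disk time at both ends, so it's only a lower bound on the network itself):

# Rough throughput check: time a copy of a large file to a network share.
# The source file and destination path are hypothetical placeholders.
import os, shutil, time

src = r"C:\temp\bigfile.bin"             # a large local test file
dst = r"\\server\share\bigfile.bin"      # destination on the remote machine

size_mb = os.path.getsize(src) / (1024 * 1024)
start = time.time()
shutil.copyfile(src, dst)
elapsed = time.time() - start

print(f"{size_mb:.0f} MB in {elapsed:.1f} s = {size_mb / elapsed:.1f} MB/s "
      f"({size_mb * 8 / elapsed:.0f} Mbit/s)")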

Sorry if I sound like I have no clue... I don't usually get accused of being an idiot. I have substantial publications, professional credentials, certifications and affiliations, and listed many of them at first, but I removed them when I realized I was getting caught up in this.

However, I will say I have [ghost]written a few chapters here and there in programming books, edited several specialty computer books for publishers like McGraw-Hill, and was involved with the design and programming of two commercial packages, in Fortran and Pascal, no less. I also have numerous published popular articles and a few technical ones, and had my own forum on tech matters on Ilink. Much of this was some time ago, but at present, I help non-profits, teach disabled vets, write, and maintain this system.

Perhaps, in a rush, I came across differently than I had planned. I may not have a good grasp of the customs and protocols here. Again, sorry.

In the end, I'm just looking for ballpark figures of what to expect, and maybe some advice on what to try to improve things if possible, within the constraints of what is available to work with. I was hoping to find someone with some experience in this situation who could help me.


** additionally, to comment on your observations:


I am not sure of what you mean by "setting my ports". Could you clarify? TCP/IP ??

I don't like any IDE stuff because, in my experience, it constantly stalls the system on access, under heavy load and with large files, even with DMA. It sometimes fights with SCSI disks because of automatic drive-letter assignments, and seems to fail more often under continuous duty, and usually has a poor warranty.

Hardware SCSI RAID 5 arrays, hardware XOR, and a large controller cache can effectively read very quickly, and write only a little slower. And you've got to love the combination of speed, fault tolerance and storage efficiency it gives.

At the present time... In my opinion... With things the way they seem to me...





.
.
-=ed
 

riser

Illustrious
If you're looking to get a transfer rate of 200MB/s, you won't get it.
Your switch can be a problem because it may only be accepting your gigabit connection at 100 Mb, even if it's stating you're at 1 Gb.
Setting your ports to full duplex, 1000 Mbit:
Device Manager - Properties on your NIC - possibly an Advanced tab will allow you to hard-set your NIC. More than likely it's set to Hardware Default or Auto Negotiate.
You might want to bypass your switch to find out if that is your limiting factor. Being unmanaged and more of an inexpensive piece of hardware, it might well be your switch. Use a crossover cable and see if your data transfer rates change.

OK, I just looked up your Tyan S2460 motherboard. You're limiting yourself to a 266 MHz FSB which is designed to write to ATA100.
Your system setup should be awesome for doing work on the PC with that motherboard, but it's not designed for gigabit networking. Your motherboard is probably a strong factor in your reduced transfer speeds.

Your data is coming in through your PC, going through your motherboard to your Adaptec RAID controllers, then finally passing on to your SCSI drives. Everything you have is of good quality, but your motherboard seems to be lacking for what you're asking it to do.

I would look up a high-end motherboard that boasts gigabit networking capabilities and compare.

I just read up on your motherboards and it looks like your current systems are relatively new but weren't designed with gigabit or the near future in mind.

Here is a quote from a Tom's Hardware article:

"Power users should keep Gigabit Ethernet in mind. While most controllers are able to transfer more than 100 MB/s, the PCI bus will quickly slow them down, as soon as it has to share the total bandwidth, 132 MB/s at 33 MHz, with other devices (which actually happens frequently). "

http://www6.tomshardware.com/motherboard/20030929/ddr400-02.html
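
To put rough numbers on that quote, a quick sketch (this assumes the NIC sits in a standard 32-bit/33 MHz PCI slot and that the bus is shared with other devices):

# Shared PCI bus bandwidth vs. what gigabit Ethernet needs at line rate.
pci_width_bits = 32                 # standard PCI slot width (assumed)
pci_clock_mhz = 33
pci_peak_mb_s = pci_width_bits / 8 * pci_clock_mhz   # ~132 MB/s theoretical
gigabit_needs_mb_s = 1000 / 8                        # ~125 MB/s at line rate

print(f"PCI peak:      {pci_peak_mb_s:.0f} MB/s")
print(f"Gigabit needs: {gigabit_needs_mb_s:.0f} MB/s")
# The NIC, the RAID controller, and anything else on the same bus share
# that ~132 MB/s, so real-world gigabit throughput drops off quickly.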

Not being familiar with your Adaptec cards and their complete function, do they bypass your southbridge or do they still need to get processed there? If they do, I'm sure that will result in another bottleneck.

--
I was up late last night and I had put in an 80+ hour week last week. I'm still feeling the effects of it but I picked up on the wrong tone from your message. I apologize if my reply came back to you in any way other than helpful.

Riser
 

EdtheROM

Distinguished
Nov 21, 2003
30
0
18,530
What did you mean by "setting the ports"?

What current dual processor boards do you recommend?

I used to prefer AMD, but the heat output is *phenomenal*, and fan noise becomes a problem. I have the servers downstairs with two other machines, running on a 4-device KVM Cat 5 extender to keep the noise and heat away from my office. I feel that Intel CPUs are generally quieter and seem to run a little cooler, but what's current may be completely different.

I am planning to replace my current Tyan with a contemporary board, and roll each "downhill", so do you have a feel for what's going to be the hot MB, with enough headroom to handle gigabit, say, this February??

Any great technical innovations due, of the kind that will extend the life and enhance the networking ability of my present system?

thanks for the reply



Can't keep up with this, so I do it every year or two.

.
.
-=ed
 

riser

Illustrious
Configure your NIC's Media type. Set it to 1000 Full Duplex.

I would recommend a motherboard that has a faster FSB and maybe built-in HighPoint controllers to increase speed. I didn't spec any out; I haven't had the time, as I've been working on my own new problem.

I have an AMD 1800+ and the temperature runs at around 90 degrees, up to 120 under full load. My Pentium 4 2.8 GHz runs a lot higher under idle conditions with extreme air cooling.
As far as fan noise, most coolers work for both AMD and Intel. I personally use a 50-dollar (at the time of purchase) Antec (can't think of the name, wind tunnel or something like that) fan with a copper core. It's extremely quiet and I found it on discount due to overstock for 20 dollars. I don't like using cheap fans, to protect my CPU and motherboard from damage. I just had someone use a cooler designed for a 900 MHz on an AMD 2000+, which needless to say shorted out his motherboard; I haven't had a chance to check if the CPU protected itself yet though. I'm hoping it did, but I'm highly doubtful.

You'll want a chipset like the nForce2 with a 400 MHz FSB where it can use the data throughput. You're getting the data to your NIC, but it can't put it anywhere until the bus opens up to allow it to pass the data on. Since you've invested in gigabit technology, you'll need to invest in a new motherboard with gigabit.

On top of that, I found out today that my Dells with integrated 10/100/1000 Intel NICs won't boot to network with PXE because they have problems with Cisco switches, so I had to use a Dell PowerConnect to boot. Brand new technology doesn't work with the most popular switches.
It took the Dell Gold Support rep almost an hour to figure it out, using all of their resources and calling their actual engineers to find the answer for me.

Riser
 

EdtheROM

Distinguished
Nov 21, 2003
30
0
18,530
OK...

So, with reliability and stability more important than overclocking, etc., and an onboard gigabit NIC not necessary, since I have a 10-pack of PRO/1000 MTs, what's the 7/9 sweet-spot MB/CPU combo that you would recommend??

I prefer quality over flash. I like ECC if available, but this is not a deal breaker.

..


.
.
-=ed
 

riser

Illustrious
I guess I still don't understand why you're hooked on ECC memory. It's only used in servers, yet you're looking to use it on your workstations? ECC is slower than non-ECC.
I would pick any motherboard with a faster FSB. I wouldn't do a Xeon unless you're running it in your server, likewise with the Opteron. Athlon 64 is out of the question too.

Before I could even guess what kind of motherboard you would want, I'd have to know what you're doing with your network.

I work at an A&E firm where Gigabit is first expected to take off, but with a good mix of computers and servers, we don't even need to touch it.

It sounds like you're just sticking with top-of-the-line stuff and you want all of your computers to be servers. That's why you have servers and workstations. Each one has its own performance factors.

Your limiting factor is the FSB. An 866 MHz/133 FSB is faster than a 933 MHz/100 FSB. Why? Because the FSB can push more.
P4 dual processors? Completely overkill unless you're doing 3-D modeling in AutoCAD ADT. It's close to overkill even for VB Studio and .NET work.
I think you should go back and take a look at what you need your computers to do and build them to spec that way, not build them like each one is a server. Just because you have everything designed like a server but are using it as a workstation doesn't mean it's going to run well. It will probably be outperformed by any comparable workstation.

Riser
 

EdtheROM

Distinguished
Nov 21, 2003
30
0
18,530
A first question: How do you get the message text in the reply?? Other forums have a "quote" button that does this.

I like the security of ECC, and I only have to have one type of memory around. My computer background cultivated overcompensation. ECC is only slower on error conditions. During non-error operations, there is no difference for the same specs. A locked system can cost irreplaceable data.

All my MBs are 133 FSB except for my backup server, which is a legacy Athlon XP 1600+ with 512 MB, fax, IDE RAID 0, on a Shuttle GAX, which is all that system needs.

I like duals because they are smooth, and don't slow down nearly as noticeably when doing several heavy-duty tasks at one time, especially backups, data crunching and compiles.
SCSI also contributes to smooth running under heavy load, especially with heavy I/O. I hate systems that "stick" or "stall".

I just would rather spend an extra $500 to give myself and those who work for me the best possible "experience", as work time is valuable, at least to me, and anything that intrudes on a train of thought could have significant consequences for my organization. I find that top-of-the-line equipment often gives the same highest level of reliability and performance. My main server runs 6-9 months without needing to be shut down except for upgrades. I like this, and want the machines that are used every day to be as reliable. Unfortunately, I'm locked in with Wintel, so I have to be fanatical.

I think Andy Warhol said "... just because you're paranoid doesn't mean they aren't after you..." ;-)

The network allows data to be transferred in a timely manner, in the most transparent way possible. I use "synchronized" folders to automatically update folders on each machine to 2 file servers, and to other relevant machines. Content is what we do... we analyze and often originate very special forms of data with proprietary software. I like to back up every time something changes, and even though we are small enough to do this, it needs bandwidth. Things are getting out of hand again, and things have to be upgraded. The network is first, and then the computing hardware to match the network and computing needs.

I'll stay with U160 RAID 0 and RAID 5 for workstation system and storage, and mirrored IDE for a local backup to be synchronized to the servers whenever things are slack. This is the safest system I can afford that gives me the flexibility I need.

BTW, Visual Studio responds well to dual processors.

I don't feel that they are all built like servers. They are all built to be as RELIABLE as servers, with differing specifications as to storage and memory size. I have been in this dogpatch for a long time, and I have lost too much original work as it is, and have decided to prevent that as best as possible within spending limits. You are not the first to voice an opinion as you have, and it is probably good to hear it again, to keep me "grounded".

I am not in a race, I just want a fast reliable machine, and now I want to learn how to maximize the gigabit interface with them, to further the reliability of the system to safeguard our workproduct.

I am leaning toward upgrading all the motherboards except the main server, as you have suggested. I will be looking at Tyan or some other quality dual-CPU MBs for the 4 workstations, and roll the replaced MBs down to the backup server, maybe an experimental unit for trying out software on my LAN without worry, and maybe to my son and wife.

I have looked into it, and will probably wait until March, when several speed enhancements will probably be at the 1.1 level, and start the upgrade. I'll be able to put aside funds to do it all at one time by then without discomfort. I hope to learn more in that time and make an optimal decision.

I thank you for your information, and hope to continue this discussion, as you seem pretty straightforward about what you think, and why. Very valuable type of information.

hope you, or others if they wish, reply



.
.
-=ed
 

riser

Illustrious
I don't have time for a long post but I'll sum a few things up.

Dual processors are only good if the software you're running supports them. They don't make everything run better. I'm not sure whether you know that or not, but unless you're certain the programs you're running support dual CPUs, it won't do you any good.

As far as your Adaptec controllers, if I recall, those only support software mirrors and RAID 5s, not hardware-based ones.
Meaning you had to install your OS and then configure your mirror/RAID through Windows, instead of creating your mirror/RAID and then installing Windows?

If that's the case you won't get what you're expecting from it.
If your main hard drive holding your OS were to fail, your software mirror/RAID would probably be lost because your OS was controlling it. Sure, you can try to rebuild/recover the RAID after reinstalling Windows, but have you tried this? It really doesn't work all that well, if at all. If software RAID is the case, you might want to test this out before something serious happens.

As far as IDE vs. SCSI: IDE and SCSI will both fail at close to the same rates. SCSI is only better when you're running a hardware-based mirror/RAID on a server. Other than that, SCSI really doesn't give you much else except a slight increase in speed, but with your current motherboards you won't notice. You're driving a car that can go 0-100 mph in 6 seconds flat, but you'll never take the car faster than 20 mph. Get the idea?

More is not better in this field.

Now, since you're running servers, standard practice is to back up your servers and not your workstations. Considering you have a small number of workstations, you might very well back up each computer. But you're running servers, so it's safe to say you're reading files off the server and not copying them down to the local machine to work on them. If you are, you're defeating the purpose of your backup and risking the possibility of information being overwritten. Which brings me to still not understanding why you have so much money invested in your workstations.

I've covered why your Adaptec controllers (which are really just PCI cards that allow you to use SCSI instead of IDE, nothing really fancy) probably won't do what you may think they will do in the case of a drive failing.

Just to mention this, at work we're running a PERC 4 RAID controller on one of our servers and manage to pull roughly 700+ Mb from server to server.

My advice:

Drop the SCSI drives on your workstations. Find a motherboard that has built-in RAID support, get 2 IDE drives and run RAID 0 (striping) for the best performance, but no data redundancy there. Get something with a higher FSB too. Your workstations will be fine.

As far as your servers, depending on how large the files are that you're dealing with you might be fine if you can deal with the performance of your gigabit lan not being up to par.
If you want to utilize the gigabit lan, you'll need to investigate proper hardware to increase that connection.

Contact a sales rep. at Graybar, Dell, or a local consulting firm to find out what they recommend.
Better yet, you may want to bring in an outside consultant to tell you what you need to get your network running extremely efficiently. You'll pay for it, but from what I can tell you're so far out in left field with your range of hardware that you'll be saving money by getting the on-site advice of someone with strong experience in networks.


Riser
 

EdtheROM

Distinguished
Nov 21, 2003
30
0
18,530
I read your recent reply with interest. I will reply to your observations in order, but because I can't get a convenient "quote" function, this may seem a little disjointed, but it isn't.

I know that this doesn't apply to much of the readership, but I think the basic line of thought, the ideas, may be of some interest to those who would at least like to think about this type of system. I don't like the money I've had to spend, but it has paid me back, with the uptime I need without problems. All this is, of course, my opinion, but I feel it is an informed opinion from 25-plus years of experience, some associated with amazing screwups that embarrass me to this day.

I've used dual processors since 1994, starting with the AMI Titan III board, and have always enjoyed the improved response. The separation of most I/O functions in W2K makes it even more effective, and as Mark Minasi noted in one of his seminars, "W2K is really designed to use SCSI..."; IDE just drags it down. Some of my software also uses it, so I'll stay with dual processors, though I may see how a single 3+ GHz one will work with some of it.

With dual processors, W2K seems to divide up the basic tasks, and you are allowed to assign certain tasks to certain processors. I have found, by default, at least in the machines I have used, that I/O is on one processor and the apps run on the other [along with certain services], as shown on the performance tab of the task manager. This makes a lot more efficient use of the 2 CPUs, as the response on the GUI is much smoother, especially with hardware [or, to a lesser but certainly noticeable extent, software] RAID. It greatly improves software RAID, which has its place in certain situations. Interrupts can be handled while the rest of the machine hums smoothly along. This might be attributable to the programs I tend to run, but I've seen at least some of it on ALL the dual-CPU machines I've used. Oddly enough, most of these machines had some form of SCSI on them, with that wonderful disconnect function.

The difference is readily noticed when heavy network or file I/O is going on while some app is working and disk I/O is occurring, a situation common on my system.

The O/S overhead does not interfere with the running apps. SCSI is important because it doesn't hold the bus while waiting for the drive to deliver the data. Along with user-defined read-ahead, a 128 MB cache and other features on the onboard controller [technically a host adapter, but I use "controller" because it's easier], reads and writes are handled very efficiently. The fastest IDE drives still have the habit of holding the bus and stopping everything else. IDE RAID may be OK for some things, but this bus-seizure situation, their basically inefficient handling of computer resources, along with the fact that they are not designed for 24/7 operation, leads me to think that I will keep my hardware SCSI systems on my workstations, as I do ECC on my MBs.

I don't understand what you were trying to say about the Adaptec controllers. I configure my RAID setup through a configuration utility on the board which comes up with a <ctrl>A at bootup - NOTHING to do with the O/S. There is a Windows app, but I just use that to see what's going on with the system when I'm up.

What do you THINK I'm expecting from my system?? I get lightning-fast loads from the mirrored 8 MB-cache 15K U160 drives, along with the reliability of duplexed mirrored drives [1 drive on each channel]. WHAT really doesn't work "...all that well"? It works very well, as far as I have experienced. My son's newish game machine is outrun by my 2-year-old clunker.

As for SCSI speed, you are missing one point - SCSI is a SYSTEM, drives and controller. The controller, as mentioned above, provides much of the performance, besides the speed of the drives. I notice a "speed increase" because of the on-board cache, transferring at whatever speed the bus can handle, and it can usually handle two drives at or a little below rated speed. Read-ahead often provides the next read from the cache, further reducing the time necessary to access the requested data, all without locking up the bus for the duration. 128 MB of ECC holds a lot of data. Get the idea?? ;-)

More may not be enough, but a little better than enough gives you some headroom, and allows you to better maximize whatever "juice" your system has. However, in cost-sensitive systems, it isn't practical. A SCSI system disk might be effective, however. I find this not to be true on systems with IDE, so it might not work. When my IDE DVD burner was running, the system slowed down noticeably. I now have all this on a separate little machine.

As to servers/workstations and backup, it depends what you want your system to do. You are assuming a lot as to what my system does and what will maximize the features that I find most important. You do those first, and do your best with the rest after they have been attended to. I have invested in the workstations because that's where the work is done, and my work is both processor- and disk-intensive, along with the fact that it's often very costly to replace, if it can be replaced at all.

You are trying to put your idea of how the out-of-the-box server/client model should be implemented over my workspace. A rigid adherence to any one model prevents you from maximizing the reliability and efficiency of the system.

IMHO.

Why won't my controllers do what I think when a drive fails?? They always have. What do YOU think they should do, and what do you think I expect? Again, why you keep saying that IDE is better than a SCSI subsystem is escaping me. I just don't understand how you think any IDE system will be faster than a 15K U160 RAID 5 on reads, which is most of what my system's workstations do. Writes aren't all that slow, either, with another built-in function of SCSI controllers, command queuing.

I definitely need a faster MB, and I will address that early this spring. Things in the network world are no longer simple, so I have to go somewhere and learn a lot about new technology. I will work on this.

I WAS the outside consultant you talk about, and I was employed by several F500 companies and the government. That was some time ago, but though the hardware changes, the underlying concepts remain pretty much the same. Being semi-retired, I will have the time.

My hardware isn't out in "left field" - it will deliver more efficient and reliable operation [except for the LAN upgrade, which is why I came here to ask for some help] than the standard system you are recommending. You are making a common mistake - assuming one size fits all.

I have found this not to be true. A judicious selection of equipment can fit much better if you can think in a creative manner and select from the features available to achieve your goal. I hate the phrase, but "...outside the box" sums it up.

And there's another aphorism that I feel applies - "you get what you pay for", usually, if you are informed when you shop. Most readers here know this, and I'm sure usually try to maximize their system with what they can afford.

Hope this makes my ideas clear.

IMHO, FWIW, YMMV [for sure]

Got to go now, data to crunch, MB's to upgrade, networks to learn about.

Thanks, sorry about the disagreement, and the poor spelling. I hope we can agree to disagree [another icky bunch of words].

I'm going to fade away now, but I will lurk.


..

-=ed
 

phsstpok

Splendid
Dec 31, 2007
5,600
1
25,780
Does Passmark report megabits/sec or megabytes/sec?

Not exactly scientific proof, but I was watching TechTV's The Screen Savers recently. They were comparing 100 Mbit vs 1000 Mbit transfers. The 1000 Mbit was really fast at transferring a 500 MB file. They didn't put a clock to it, but I timed it at 13 seconds, which makes for about 38 MB/sec.

They were using 4 identical Falcon Northwest "LAN Party" boxes (except for the NICs) and just crossover cables.

56K, slow and steady does not win the race on internet!
 

EdtheROM

Distinguished
Nov 21, 2003
30
0
18,530
<S I I G H H H>

I guess I must have something wrong

I have recently measured between 14 and 17 seconds to transfer a 260 MB file.

I'll have to work on it.

Thanks for the words...

.
.
-=ed
 

phsstpok

Splendid
Dec 31, 2007
5,600
1
25,780
Don't forget to factor in your hard drives' speeds.

A really fast IDE drive might be able to maintain 38 MB/sec for reading but not for writing. Single drives will surely average much lower.

Drive write speed will be your bottleneck regardless of gigabit ethernet's theoretical speed or real speed.

I'm sure the LAN Party boxes have nice RAID setups. The systems probably also have enough RAM to buffer a 500 MB transfer so disk speed wouldn't be a factor anyway.

56K, slow and steady does not win the race on internet!
 

phsstpok

Splendid
Dec 31, 2007
5,600
1
25,780
By the way, 260 MB in 14 seconds translates to about 18.5 MB/sec, or roughly 150 Mbits/sec, around 15% of gigabit ethernet's theoretical max. That's much higher than the 6% or 7% you mentioned earlier.

I have a feeling that Passmark is reporting network transfers in MBytes/sec not Mbits/sec, so 68 MBytes/sec is not that bad at all.
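
Here's the arithmetic spelled out as a small Python sketch (8 bits per byte, ignoring protocol overhead):

# Convert a timed transfer into MB/s, Mbit/s and percent of gigabit line rate.
size_mb = 260.0    # file size from the post above, in megabytes
seconds = 14.0

mb_per_s = size_mb / seconds      # ~18.6 MB/s
mbit_per_s = mb_per_s * 8         # ~150 Mbit/s
print(f"{mb_per_s:.1f} MB/s = {mbit_per_s:.0f} Mbit/s "
      f"= {mbit_per_s / 1000:.0%} of gigabit line rate")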

56K, slow and steady does not win the race on internet!
 

EdtheROM

Distinguished
Nov 21, 2003
30
0
18,530
The drives are 15K 18 GB U160 Seagate Cheetahs, which have an external transfer rate of about 100 MB/sec, but I don't know if this is the sustained rate. The transfer rate of the bus is about 260 MB/sec, which means 2 drives would be able to saturate it. The controller is an Adaptec 3200 with a 64-bit bus and 128 MB ECC cache.

I don't think these are the problem.

I think riser is right about the bus, because I don't think the drive system is too slow, but I'm not sure about the controller, because the Adaptec is in a 64-bit 33 MHz slot. The Adaptec manual and product sheets seem to be missing this info, even if only approximate.

I would think that one drive would be able to reach 50-60 MB/sec, and two around 100 MB/sec, but they aren't.
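
Just to put numbers on the slot itself, a rough sketch (the ~55 MB/sec per-drive figure is an assumption, roughly the middle of the 50-60 estimate above):

# 64-bit / 33 MHz PCI slot bandwidth vs. a two-drive RAID 0 read.
slot_mb_s = 64 / 8 * 33                        # ~264 MB/s theoretical for the slot
drive_sustained_mb_s = 55.0                    # assumed sustained rate per drive
two_drive_raid0 = 2 * drive_sustained_mb_s     # ~110 MB/s aggregate

print(f"64-bit/33 MHz PCI slot: {slot_mb_s:.0f} MB/s")
print(f"Two-drive RAID 0 read:  {two_drive_raid0:.0f} MB/s")
# So the slot itself doesn't look like the limit; the shared PCI bus and
# whatever slot the NIC sits in seem like the more likely squeeze.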

Have to get a faster board to see what happens, unless something else occurs to me that solves the problem.

thanks



.
.
-=ed
 

EdtheROM

Distinguished
Nov 21, 2003
30
0
18,530
I know. I think Passmark is doing something I don't know about. It SAYS Mbit/sec, but that could be a mistake.

If the label is a typo and it's really MBytes rather than Mbits, it would read about 63 MB/sec.

oh, well... ;-)

thanks again



.
.
-=ed
 

phsstpok

Splendid
Dec 31, 2007
5,600
1
25,780
Sorry, I should have limited my comments to what I observed in the television program.

I didn't realize you were running 15K drives in a RAID array. (This is what happens when one makes snap judgements without reading an entire thread.)

I haven't kept up with drive capabilities, but looking at the Cheetah X15 data sheet I see the internal transfer rate is 38.0 to 48.9 MB/sec. (I believe this means the sustainable rate.) Plus you have two of them. Obviously, HD transfer rate is not your problem.

I take back what I said before (LOL). I have no idea what is limiting your network speed.


56K, slow and steady does not win the race on internet!
 

EdtheROM

Distinguished
Nov 21, 2003
30
0
18,530
I too make assumptions... then the fun begins...

The internal transfer rate is 49-75 MB/sec on my spec sheet, which I just found.

I have no idea yet, but I worry things like a dog with a bone, so I will figure it out when I get a few days to really tear into it and experiment... I hope ;-) ;-)

Riser said the MB is the problem, and maybe he's right

I'm at a loss right now.



.
.
-=ed
 

phsstpok

Splendid
Dec 31, 2007
5,600
1
25,780
49-75? I found the PDF that says the same thing, but the online data sheet said 38.0 to whatever I typed.

Oh well, either way it doesn't explain the lack of performance you are getting.

I'm sure you'll find the problem.

56K, slow and steady does not win the race on internet!
 

Dev

Distinguished
Sep 18, 2001
553
0
18,980
Just a quick note. If you have Gigabit Ethernet you have a theoretical limit of 125 Megabytes/second (1 byte = 8 bits) for half duplex and 250 for full duplex. However, many, many things affect the real throughput of a network. First, all the bottlenecks in the computer, whatever they may be; then the load on the computer while running your applications plus the file transfer. A cable with a kink in it will also slow things down, as will interference from regular electrical equipment. After all is said and done you'll reach a realistic speed of 60-80% of the theoretical speed, with 80% being almost perfect. You are at about 50% with 65 MB/s, which means that you have some bottleneck(s) somewhere, but it is still decent performance, and finding the bottleneck might be hard.
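
To make that concrete, the same arithmetic as a quick Python sketch (using the 60-80% rule of thumb and the ~65 MB/s figure from the thread):

# Gigabit Ethernet: theoretical limit vs. realistic expectations vs. measured.
line_rate_mbit = 1000.0
theoretical_mb_s = line_rate_mbit / 8        # 125 MB/s in one direction

realistic_low = 0.60 * theoretical_mb_s      # ~75 MB/s
realistic_high = 0.80 * theoretical_mb_s     # ~100 MB/s
measured_mb_s = 65.0                         # figure from this thread

print(f"Theoretical: {theoretical_mb_s:.0f} MB/s")
print(f"Realistic:   {realistic_low:.0f}-{realistic_high:.0f} MB/s")
print(f"Measured:    {measured_mb_s:.0f} MB/s "
      f"({measured_mb_s / theoretical_mb_s:.0%} of theoretical)")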

Dev

---
My Sig:
))
(( ___________________
|||_____________|_____|
 

phsstpok

Splendid
Dec 31, 2007
5,600
1
25,780
That's what I thought but EdtheROM stated that Passmark is reporting 64 Mbit/sec transfers not 64 MByte/sec transfers for his network.


56K, slow and steady does not win the race on internet!
 

Dev

Distinguished
Sep 18, 2001
553
0
18,980
Oops, I didn't bother to read all the long posts (nor will I), but my first guess is a bad cable. If the cable is known good, make sure that the cable is not running right next to some electrical equipment such as a TV, fridge, microwave, or anything else with an electrical field. Third, make sure that all the connections are good between NIC and cable and between cable and switch/router.

If that doesn't help more serious troubleshooting is required.

Dev

---
My Sig:
))
(( ___________________
|||_____________|_____|
 
