For all of the mobile hardware that wireless connections serve, the fastest wireless technologies still can't compete with the fastest wired networks. A good Ethernet switch serves up exceptional performance, while enabling the features and capabilities needed to build robust home, small business and enterprise networks. If you would like to know more about how switches work, be sure to read our Network Switch 101.
Networking switches take many forms, ranging from managed and unmanaged switches to those armed with Power over Ethernet and other advanced features. There are many ways to compare and contrast networking hardware, but the Tom's Hardware team created a suite to reflect what we felt were the most important metrics: throughput and response time. Our methodology involves multiple benchmarks, some of which illustrate peak performance, and some designed to generate real-world results under plausible work conditions.
Test System Specifications
We use four systems for our network switch testing:
Test Bench: Tom's Hardware Reference System
Test NUC: Intel NUC5i7RYH
Test Server: ASRock Vision X 471D
Test Laptop: Sony VAIO SVS13112FXS
Testing Suite And Methodology
Our testing suite is split into four benchmarks: Point-to-Point, Bi-Directional, Mesh Interference and Response Time. Tests are conducted using Ixia's IxChariot software to measure throughput and response time between systems.
Before the switches are tested, an Ixia endpoint agent is installed on our three client systems (the server is already running IxChariot and does not need another endpoint). Downloadable from the Ixia website, the agent uses a preconfigured script to measure and report the results we're looking to generate via IxChariot's interface.
A control is a test in which the subject is not influenced by the variables under study. In this case, our control is a straight cable test; no Ethernet switch is involved. Throughput is measured point to point between the ASRock VisionX and the Sony VAIO.
Using IxChariot and each computer's IP address, we created an endpoint pair, designating our ASRock VisionX server as Endpoint 1 and our Sony VAIO as Endpoint 2. We chose a script aimed at measuring throughput over Gigabit Ethernet; the same script was used for our Point-to-Point, Bi-Directional and Mesh Interference tests. All other settings remained the same.
Under Run Options, "Run for a fixed duration" is set to one minute. The control test and all subsequent tests are conducted under a one-minute cap.
Once the test is complete, the Throughput tab illustrates the minimum, maximum and average speeds. The line graph below illustrates performance at each point in time. We'll be comparing this data to the results from our Point to Point, Bi-Directional and Mesh Interference tests later.
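IxChariot is commercial software, but the shape of this throughput measurement is easy to sketch with plain TCP sockets. The sketch below is an illustrative stand-in, not our actual test harness: it streams data over a local loopback connection for a fixed duration and reports the average throughput in Mbit/s, much as the Throughput tab does (the host, port and duration here are assumptions for demonstration).

```python
import socket
import threading
import time

def run_sink(server_sock):
    """Endpoint 2: accept one connection and discard everything it sends."""
    conn, _ = server_sock.accept()
    with conn:
        while conn.recv(65536):
            pass

def measure_throughput(host="127.0.0.1", duration=1.0):
    """Endpoint 1: stream data to the sink for `duration` seconds and
    return the average throughput in Mbit/s."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((host, 0))                 # ephemeral port
    server.listen(1)
    port = server.getsockname()[1]
    threading.Thread(target=run_sink, args=(server,), daemon=True).start()

    payload = b"\x00" * 65536
    sent = 0
    with socket.create_connection((host, port)) as client:
        start = time.monotonic()
        deadline = start + duration
        while time.monotonic() < deadline:
            client.sendall(payload)
            sent += len(payload)
        elapsed = time.monotonic() - start
    server.close()
    return (sent * 8) / (elapsed * 1_000_000)  # bits/s -> Mbit/s

if __name__ == "__main__":
    print(f"Average throughput: {measure_throughput():.1f} Mbit/s")
```

On loopback this number reflects CPU and memory speed rather than a switch, which is exactly why a real control run uses a physical cable between two machines.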
Our first variable metric is the Point-to-Point TCP Throughput Test, which measures the throughput from one endpoint to another. The process is exactly the same as the Control Test, but it runs through our network switches. As in the Control Test, the active members are the ASRock VisionX and Sony VAIO laptop; the other two computers are offline.
The image above shows how IxChariot charts its results. In this Point-to-Point Throughput Test, you can see a slight but evident difference between the Control Test and Network Switch runs. The delta isn't particularly noteworthy; after all, adding a switch simply introduces a bridge for information from the server to travel through—a lone traveler on an empty bridge should not encounter much resistance on its journey. Likewise, our stream of benchmark data is not obstructed.
Testing throughput from the server to the laptop (and vice versa) over a network switch introduces more network traffic and produces lower average throughput. As with the previous benchmark, one pair designates the server as Endpoint 1 and the laptop as Endpoint 2. A second pair is created in IxChariot with the laptop as Endpoint 1 and the ASRock VisionX as Endpoint 2.
This chart shows the effect of driving more traffic through a network switch, demonstrating lower throughput across the board. In our bridge analogy, this traffic would be two travelers crossing the same bridge. Depending on how wide and well-built the bridge is, they might bump into each other, brush shoulders or walk past each other effortlessly.
The Mesh/Interference Test generates even more traffic through the switch. The results from the Mesh Test will suggest the performance level you might expect in a practical setting. After all, network switches are meant to have multiple devices connected to them. We want to stress each switch by flooding it with increasing amounts of data. Adding traffic in measured increments gives us an idea of how well the switch performs with two, three, four or more clients attached.
This test splits connections into three pairs: test bench to the NUC, NUC to the server, and server to the laptop.
In this example, more travelers with different destinations use the bridge, and contact with other travelers slows them down. Adding more connections to the switch diminishes overall throughput. Certain pairings, such as that of our Intel NUC to the server, delivered lower average, minimum and maximum throughput in multiple tests. If our bridge were bigger and sturdier, there would be less interference. A business-class switch, for example, is meant to handle high levels of traffic with numerous systems connected.
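To illustrate the principle (not to replicate IxChariot), a sketch can flood one local sink with an increasing number of concurrent streams and report each stream's average throughput; on real hardware, per-stream numbers fall as the pair count grows. The host, port, durations and stream counts below are illustrative assumptions.

```python
import socket
import threading
import time

def sink(server_sock):
    """Accept every incoming connection and discard its data."""
    while True:
        try:
            conn, _ = server_sock.accept()
        except OSError:                    # server socket was closed
            return
        def drain(c):
            with c:
                while c.recv(65536):
                    pass
        threading.Thread(target=drain, args=(conn,), daemon=True).start()

def stream(host, port, duration, results, index):
    """One endpoint pair: send data for `duration` seconds, record Mbit/s."""
    payload = b"\x00" * 65536
    sent = 0
    with socket.create_connection((host, port)) as client:
        start = time.monotonic()
        deadline = start + duration
        while time.monotonic() < deadline:
            client.sendall(payload)
            sent += len(payload)
        results[index] = (sent * 8) / ((time.monotonic() - start) * 1_000_000)

def mesh_test(n_streams, duration=0.5, host="127.0.0.1"):
    """Run n concurrent streams against one sink; return per-stream Mbit/s."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((host, 0))
    server.listen(n_streams)
    port = server.getsockname()[1]
    threading.Thread(target=sink, args=(server,), daemon=True).start()

    results = [0.0] * n_streams
    workers = [threading.Thread(target=stream,
                                args=(host, port, duration, results, i))
               for i in range(n_streams)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    server.close()
    return results

if __name__ == "__main__":
    for n in (1, 2, 3):
        per_stream = mesh_test(n, duration=0.3)
        print(n, "streams:", [f"{r:.0f} Mbit/s" for r in per_stream])
```

Stepping the stream count up one at a time mirrors our "measured increments" approach: the aggregate may hold steady while each individual pair's share shrinks.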
Response Time Tests are a little bit different. The same procedure for creating a Point-to-Point Test is used—a pairing between the ASRock VisionX and Sony VAIO laptop is created, and the run duration is set to one minute. But we use a different script tailored to measuring the response time of the switch.
After the test is completed, this error message should appear: "IxChariot cannot show the response time if the results are below 20 milliseconds." To find the results, switch to the Transaction Rate tab rather than the Response Time tab, where you'll find the average transaction rate. To derive the response time, use the following equation: Response Time (ms) = 1 / Average Transaction Rate x 1000
The lower the number, the faster the response time.
In this sample, the average transaction rate is 3437.435 transactions per second. Divide 1 by 3437.435 and multiply the result by 1000; the response time rounds to 0.291 milliseconds.
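That conversion is simple enough to script. A minimal helper, using only the formula and the sample transaction rate given above (the function name is our own):

```python
def response_time_ms(avg_transaction_rate):
    """Convert IxChariot's average transaction rate (transactions per
    second) into a response time in milliseconds: 1 / rate * 1000."""
    if avg_transaction_rate <= 0:
        raise ValueError("transaction rate must be positive")
    return 1.0 / avg_transaction_rate * 1000.0

# The sample from the text: an average rate of 3437.435 transactions/s.
print(round(response_time_ms(3437.435), 3))  # prints 0.291
```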
Response times vary only slightly from switch to switch. And at under one millisecond, you won't notice a difference from one model to another. But knowing exactly how fast each switch responds may help you get the best value for your dollar.
As mentioned in the introduction, there is lots of variety in the network switch market. Each manufacturer sells different models armed with distinct feature sets. When we first started testing Ethernet switches, we devised a test suite that catered to as many products as possible, knowing that we'd have to be adaptable—so consider this How We Test a version 1.0.
In the coming weeks, we will consider different approaches for testing specialty switches, such as managed switches and smart switches. Any suggestions regarding revisions to our current test suite and/or new procedures for specific products are always welcome!