Gigabit Ethernet Transmission - What's Reality?

MikeAKQJX

Hello folks.

I started a research project a few days ago and have gathered input from multiple threads on these forums. I also have a number of tools and calculators at my disposal... all of which yield different results. So, even though this is an exercise in sizing and theory, it's hard to feel confident about using or building a tool when there are so many conflicting points of input.

So here goes… My project is to set up a small Gigabit Ethernet network (which doesn't exist in real life) and design a solid backup solution for about 20 servers. This part of the project is only concerned with the network transmission side of the backup scenario.

From all my research, I can say the one thing that is constant is that 1000BaseT (GbE) has a raw transmission rate of 1 billion bits per second (1,000,000,000). Unfortunately, after that, everything varies. Between talk of MB, GB, MiB, GiB, etc... there is a lot to contend with as far as opinions go.

So this is the theory I came up with, step by step to follow the math and the logic...

1. 1000BaseT / 1GbE raw transmission capability is capped at 1,000,000,000 physical bits per second.

2. If you divide this number by 10, you get the number of bytes that can be transmitted per second. So why 10 and not 8? I am trying to account for the 8b/10b encoding scheme, which expands the data before it goes onto the wire. So if you send a byte (8 bits) from one endpoint of a 1GbE network to another, the 8b/10b encoding operates on the data and, in effect, transmits 10 physical bits for every 8 bits of data. This yields 100,000,000 real bytes per second that can truly be transmitted.

3. Up-convert bytes to KB by dividing by 1,024, then KB to MB by dividing by 1,024 again. You end up with 95.3674 MB/sec as the upper limit.

Simplified, my formula looks like this…
(1,000,000,000 / 10) / (1,024 * 1,024) = 95.3674 MB/sec
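
If it helps to check the arithmetic, here is a quick Python sketch of that calculation (the variable names are just mine for illustration):

# Raw line rate, divided by 10 for the assumed 8b/10b overhead, then
# converted from bytes to binary MB (1,024 * 1,024).
line_rate_bps = 1_000_000_000                       # raw signalling rate, bits/sec
payload_bytes_per_sec = line_rate_bps / 10          # 10 line bits per data byte
mb_per_sec = payload_bytes_per_sec / (1024 * 1024)  # bytes -> binary MB
print(round(mb_per_sec, 4))                         # 95.3674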

Some calculations stated here and around the internet point to a different formula and result for the real amount of data that can be transmitted.

The first one is…
(1,000,000,000 / 8) / (1,024 * 1,024) = 119.2093 MB/sec

And the second one is…
(1,000,000,000 / 8) / (1,000 * 1,000) = 125 MB/sec
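
For comparison, here is a small Python sketch that computes all three figures side by side; the only differences between them are the encoding divisor (10 vs. 8) and the unit base (1,024 * 1,024 vs. 1,000 * 1,000):

# Compare the three formulas; only the divisor and the unit base change.
RAW_BPS = 1_000_000_000
variants = {
    "8b/10b encoding, binary MB":   (10, 1024 * 1024),  # 95.3674
    "no encoding penalty, binary":  (8, 1024 * 1024),   # 119.2093
    "no encoding penalty, decimal": (8, 1000 * 1000),   # 125.0000
}
for name, (line_bits_per_byte, unit) in variants.items():
    print(f"{name}: {RAW_BPS / line_bits_per_byte / unit:.4f} MB/sec")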

---

OK folks. Is my premise correct, or is one of the other premises correct, network overhead aside? I need someone to let me know if I'm on the right track or not. Thanks in advance, everyone.
 

MikeAKQJX

Emerald... Appreciate the feedback. Specifically for the project I am working on, the network is going to be designed to be the slowest link, capitalizing on a high-throughput PCI or InfiniBand bus architecture, with disk as the primary target in a RAID-6 set of 8 working drives plus 2 hot-swappable spares when needed.

After some more research, I am now thinking I have 2 correct answers.

The first correct answer applies to Gigabit Ethernet over optical fiber (1000BASE-SX/LX rather than 1000BaseT), which has the 8B/10B encoding penalty built in...
(1,000,000,000 / 10) / (1,024 * 1,024) = 95.3674 MB/sec

But then, after a closer look, Gigabit switches using Category 6 cabling (which carry 1000BaseT over twisted pairs) do not use 8B/10B encoding. As such, the second correct answer applies under those circumstances...
(1,000,000,000 / 8) / (1,024 * 1,024) = 119.2093 MB/sec
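
To make the two cases explicit, here is a small Python helper along the same lines (again just my own sketch with made-up names); pass encoded=True for optical links that use 8B/10B and encoded=False for 1000BaseT over copper:

# Payload rate of a Gigabit link in binary MB/sec, with or without the
# 8B/10B line-encoding overhead.
def gbe_payload_mb_per_sec(encoded, raw_bps=1_000_000_000):
    line_bits_per_data_byte = 10 if encoded else 8
    return raw_bps / line_bits_per_data_byte / (1024 * 1024)

print(f"{gbe_payload_mb_per_sec(encoded=True):.4f}")   # 95.3674  (optical, 8B/10B)
print(f"{gbe_payload_mb_per_sec(encoded=False):.4f}")  # 119.2093 (1000BaseT)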

Still, it would be nice to see if I am on the right track with this or not as far as the calculations are concerned. Anyone out there willing to check my work?

-Mike