The "10 bits to the byte" thing is a rough "rule of thumb" from the days of modem communication over phone lines. There were several protocols used that included error correction checks. A common type was inclusion of a Parity Bit in each byte sent. In order to send a full 8 bits of data in a "byte", this meant the signal actually had to include a ninth bit containing the Parity information. In addition, many protocols added a dedicated "Stop Bit" in each "byte" to help the receiving modem identify the end of one byte before another began. Of these two, Parity could be Odd, Even, or None, and there could be 0,1 or 2 Stop Bits. Probably the most common combination became "8,N,1" for 8 data bits, no parity bit, and one stop bit. That's a total of 9 bits transmitted for each "byte" which actually contained only 8 bits of data. On top of that, there was a little bit of behind-the-scenes communication between modems, plus the occasional re-transmit request when a transmission error was detected by the receiver. Net result was that one byte of real data might take close to 10 bits of actual data transmitted over the long-term average. So, converting from bits per second to bytes per second was a simple divide-by-10 exercise. What could be easier? That is, if you're a human with 10 fingers. If you're happier in binary, you could do three Roll-Right operations for a divide-by-8 result and use that approximation, because refining that to divide-by-10 is a lot more work!