unnameduser

Distinguished
Mar 4, 2009
I for one am all for the binary system of measuring all things computers. But HD manufacturers use decimal because it looks better, and SATA is all screwed up. The newest SATA standard being developed is 6.0 Gb/s. I believe what they mean is 600 MB/s using the decimal system and 572.20458984375 MB/s using the binary standard (or maybe the 600 is binary, but either way). How do they get 10 bits to the byte? How fast is a Gigabit network connection? 100 MB/s, 128 MB/s, or 125 MB/s?
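
Here's the arithmetic that produces each of those numbers, as a quick Python sketch (assuming "gigabit" means 10^9 bits in decimal and 2^30 bits in binary, and that the 10-bits-to-the-byte figure applies):

# Back-of-the-envelope unit arithmetic, plain Python, no libraries.
sata_bits_per_sec = 6 * 10**9                     # "SATA 6.0 Gb/s" raw line rate

# If 10 wire bits carry one data byte (the "10 bits to the byte" question):
print(sata_bits_per_sec / 10 / 10**6)   # 600.0            -> decimal MB/s
print(sata_bits_per_sec / 10 / 2**20)   # 572.20458984375  -> binary MB/s (MiB/s)

# Gigabit network, and where each of the three candidate answers comes from:
print(10**9 / 8  / 10**6)   # 125.0 -> decimal gigabit, 8 bits per byte
print(10**9 / 10 / 10**6)   # 100.0 -> decimal gigabit, 10 bits per byte
print(2**30 / 8  / 2**20)   # 128.0 -> binary gigabit, binary megabytes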

A joke I heard: There are only 10 kinds of people in this world. Those who understand binary and those who don't.
 

yadge

Distinguished
Mar 26, 2007
Well, I guess technically the hard drive people are correct; it's the operating systems that are incorrect. 1 gigabyte is exactly what its name implies: 1 billion bytes. But the OS measures in binary. So instead of telling us we have about 74 gigabytes after installing an 80 GB drive, it should really tell us it's about 74 gibibytes.

http://en.wikipedia.org/wiki/Gibibyte

So... either the OS needs to change what it tells us from gigabytes to gibibytes, or the hard drive people need to start measuring in gibibytes.
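
The 80 GB example worked out in Python, just to make the two units concrete:

advertised_bytes = 80 * 10**9          # "80 GB": the drive maker's decimal gigabytes

gigabytes = advertised_bytes / 10**9   # 80.0  GB  (decimal, what the label says)
gibibytes = advertised_bytes / 2**30   # ~74.5 GiB (binary, what the OS reports as "GB")

print(f"{gigabytes:.1f} GB = {gibibytes:.1f} GiB")   # 80.0 GB = 74.5 GiB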

Oh, and nice joke. lol
 

Paperdoc

Polypheme
Ambassador
The "10 bits to the byte" thing is a rough "rule of thumb" from the days of modem communication over phone lines. There were several protocols used that included error correction checks. A common type was inclusion of a Parity Bit in each byte sent. In order to send a full 8 bits of data in a "byte", this meant the signal actually had to include a ninth bit containing the Parity information. In addition, many protocols added a dedicated "Stop Bit" in each "byte" to help the receiving modem identify the end of one byte before another began. Of these two, Parity could be Odd, Even, or None, and there could be 0,1 or 2 Stop Bits. Probably the most common combination became "8,N,1" for 8 data bits, no parity bit, and one stop bit. That's a total of 9 bits transmitted for each "byte" which actually contained only 8 bits of data. On top of that, there was a little bit of behind-the-scenes communication between modems, plus the occasional re-transmit request when a transmission error was detected by the receiver. Net result was that one byte of real data might take close to 10 bits of actual data transmitted over the long-term average. So, converting from bits per second to bytes per second was a simple divide-by-10 exercise. What could be easier? That is, if you're a human with 10 fingers. If you're happier in binary, you could do three Roll-Right operations for a divide-by-8 result and use that approximation, because refining that to divide-by-10 is a lot more work!
 

unnameduser

Distinguished
Mar 4, 2009
I just don't like trying to say "gibibyte". You never buy RAM according to the decimal system. A gig of RAM is 1024 MB, and it is always sold in power-of-two sizes: 256, 512, 1024, 2048 MB. I don't like GiB, but I dislike the duality of the GB even more.
 
SATA 6.0 Gb/s is a true 6-gigabit standard. However, because of its 8b/10b line encoding (similar in effect to the parity and stop bits mentioned above), it actually has to send 10 bits on the wire for every 8 bits of actual data, which works out to 10 bits to the byte on a SATA cable. Therefore, 6 Gb/s SATA has a useful transfer rate of 600 MB/s. This has nothing at all to do with the decimal-vs-binary capacity issue; it is purely a property of the SATA transfer protocol.
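
The 8b/10b arithmetic spelled out in Python (a sketch; 6.0 Gb/s is the raw line rate on the cable):

line_rate = 6 * 10**9             # SATA 6.0 Gb/s: raw bits per second on the cable

data_bits  = line_rate * 8 // 10  # 8b/10b encoding: 8 data bits per 10 wire bits
data_bytes = data_bits // 8       # 8 bits per byte of real data

print(data_bytes / 10**6, "MB/s") # 600.0 MB/s -- same as line_rate / 10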

As for networking, I'm not as familiar, so I can't say anything about the max useful rate on a gigabit network.