The definition of "kilo" (binary vs. decimal) affecting HDD capacity

lschmidt

Distinguished
Sep 21, 2006
97
0
18,630
I've been trying to figure out why my Seagate Barracuda 500 GB hard drive shows up as only a 465 GB hard drive on my computer. I finally figured it out...

Apparently HDD manufacturers (at least Seagate) use the decimal definition of "kilo" meaning 1,000 versus the binary definition of 1,024. So to your computer, 1 kilobyte is 1,024 bytes but the decimal definition is 1,000 bytes. So, in decimal notation 1 megabyte is 1,000,000 bytes and 1 gigabyte is 1,000,000,000 bytes (versus 1,048,576 bytes and 1,073,741,824 bytes, respectively - in binary notation).

So, if you go by decimal notation, 500 GB is equal to 500,000,000,000 bytes. That is what Seagate uses to market its hard drives. When you hook up the drive, though, those 500,000,000,000 bytes work out to only ~465 binary gigabytes because, again, in binary there are 1,024 bytes in a kilobyte (not 1,000).

500,000,000,000 bytes ÷ 1,073,741,824 bytes per binary gigabyte ≈ 465.66

...which is why my 500 GB shows up as only 465 GB.
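
For anyone who wants to check the numbers, here is a quick Python sketch of the same conversion (the 500 GB figure is just my drive from above; plug in whatever capacity you bought):

# Convert a marketed (decimal) capacity into the binary units the OS reports.
marketed_bytes = 500 * 1000**3           # 500 GB, counted the way Seagate counts

decimal_gb = marketed_bytes / 1000**3    # 1 GB  = 1,000,000,000 bytes
binary_gb  = marketed_bytes / 1024**3    # 1 GiB = 1,073,741,824 bytes

print(f"{decimal_gb:.2f} GB as marketed")   # 500.00
print(f"{binary_gb:.2f} GB as reported")    # 465.66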

Am I the only one who didn't know this, or is it common knowledge?

If you want the links, here are two:

http://www.seagate.com/ww/v/index.jsp?locale=en-US&name=Storage_Capacity_Measurement_Standards_-_Seagate_Technology&vgnextoid=9493781e73d5d010VgnVCM100000dd04090aRCRD
http://seagate.custhelp.com/cgi-bin/seagate.cfg/php/enduser/std_adp.php?p_faqid=336
 

shoota

Distinguished
Jul 3, 2007
221
0
18,680
This is common knowledge for most people here, but you explained it very well. Even though most people know about it, I still haven't heard a good explanation of why HDD manufacturers can't use binary to market their drives... anyone know?
 

sturm

Splendid
I did a little thinking and here are my thoughts. When hard drives were really small, like 10 or 20 MB, you really didn't lose that much. A 20 MB disk was actually 19.073486328125 MB in binary terms. It made sense from a marketing standpoint to just call it a 20 MB disk. To increase capacity, HD makers would just add another platter, so a 20 MB drive became a 40 MB drive. In binary megabytes you would have to mark those drives as 19 MB and 38 MB; 20 and 40 just look better.
As drives kept getting bigger and bigger, they just never changed.
 

rodney_ws

Splendid
Dec 29, 2005
3,819
0
22,810
I agree this is common knowledge for many people here, but the OP did do a good job of explaining it. I'm sure there are plenty of people here wondering where the rest of their drive went who haven't bothered to ask. Manufacturers should just make a gentleman's agreement to advertise their capacities truthfully. With storage so cheap now, I'm not sure there would be any advantage to artificially inflating your stated capacity.
 

scudst0rm

Distinguished
Feb 17, 2008
157
0
18,710
the "binary notation" actually has its own units.

http://en.wikipedia.org/wiki/Gibibyte

where 1 GB in binary notation should actually be written 1 GiB (gibibyte, "giga binary byte").
Technically, giga should only be used to mean one billion, since it is an SI prefix, but at some point someone decided it was too confusing to have both GB and GiB, so GB ended up being used for both. As sturm said, this was back in the days of megabyte (or kilobyte) hard drives, when the difference between the two units was negligible.

I sometimes see applications, such as p2p clients, using MiB units. I personally believe that now that we are in the age of the terabyte HDD, there should be an effort to bring these units into the mainstream. I realize HDD marketers will want to use the bigger-sounding units, but I should at least be able to look at my C: drive and see either 320 GB or 298 GiB.
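
You can actually get both numbers for your own drive with a few lines of Python (just a sketch; shutil.disk_usage is in the standard library, and the path is whatever drive you want to check):

# Report one drive's capacity in both decimal GB and binary GiB.
import shutil

total_bytes = shutil.disk_usage("C:\\").total   # Windows drive; use "/" on Linux or macOS

print(f"{total_bytes / 1000**3:.0f} GB  (decimal, how the drive is marketed)")
print(f"{total_bytes / 1024**3:.0f} GiB (binary, roughly what Explorer shows)")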
 

firemist

Distinguished
Oct 13, 2006
209
0
18,680
Different take on this altogether. The advertised disk capacities are correct: a 500 GB drive is able to store 500 GB (counting the way you were taught in elementary school). The disk space reported by the OS under-reports the space available because it effectively ignores 24 bytes out of every 1,024 it counts. Open windoze explorer and check the properties of the C: drive and you will see two values reported for 'used', 'free', and 'total' capacities. Check the properties tab on any file on your disk and you will see two values for 'size' and two for 'size on disk', and none of them match the value reported in Explorer.

From my experience, I believe the origin came from being lazy and shortening (or rounding) 1024 to 1000 and referring to it as 1k, or 4096 to 4000 and referring to it as 4k, etc. All the designers and assembly language programmers I worked with knew the context, understood it, and used it that way. It was a lot easier to say 1k than 1024, and it became shorthand.
 

ira176

Distinguished
Mar 19, 2006
240
0
18,680
I wish the storage companies and memory companies would just use one standard, and I wish that standard were the binary one that memory is already sold by.
 

grunchlk

Distinguished
Sep 11, 2007
22
0
18,510
Why do products in supermarkets cost 19.95 and not just 20?
Because 19.95 feels cheaper than 20.00.
And because 1999.85 is A LOT cheaper than 2000.00... for some people.

Years ago, everyone who had a little bit of programming experience knew that 1 KB = 1024 bytes and 1 MB = 1024 KB (well, actually, back then 1 MB was utopia...).
It was simply the way computers 'worked'.

I can't imagine that HD manufacturers didn't know this.
It's deliberate: it's marketing!

Personally, I think HD manufacturers are Ferengi in disguise...
 

lschmidt

Distinguished
Sep 21, 2006
97
0
18,630
Here is an interesting question then: my AT&T U-verse internet package is supposedly rated at 6 megabits per second download. Does that really mean 768 binary kilobytes per second (1 binary megabit works out to 128 binary kilobytes) or 732.42 binary kilobytes per second (1 decimal megabit works out to 122.07 binary kilobytes)?

Any ideas on that?
 

SomeJoe7777

Distinguished
Apr 14, 2006
1,081
0
19,280


First of all, understand that you never "lose" anything. Just because the OS manufacturers and the hard drive manufacturers are using different definitions of prefixes does not mean that the ability to store bytes evaporates into thin air. If you buy a 500 GB hard drive, you bought the ability to store 500,000,000,000 bytes. Whether that is reported as 500 GB, 465 GiB, or (incorrectly) 465 GB, you didn't "lose" anything.

Back in the very old HD days, when drives were measured in MB, many HD manufacturers actually used the 1024 definition. A 20 MB drive back then actually stored 20,971,520 bytes (20 MiB). One HD manufacturer switched over to the 1000 definition to give themselves a boost in the advertised number, then the other HD manufacturers had to follow suit to compete, and it's been that way ever since.



No, the OS is not under-reporting anything. It's using a different definition of the prefixes (perfectly legal) and an incorrect abbreviation (GB instead of GiB) for the units it's actually using.



Exactly correct.



Telecom and network data rates use base 10 prefixes. A 6 Mb/sec connection should be transferring 6,000,000 bits per second (750 KB/sec, or 732 KiB/sec).
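
If it helps, here is the same arithmetic as a quick Python sketch (the 6 Mb/s figure is just lschmidt's U-verse example):

# A 6 Mb/s link, with the rate defined in decimal (SI) units as telecom rates are.
bits_per_second = 6_000_000
bytes_per_second = bits_per_second / 8       # 750,000 bytes/sec

print(bytes_per_second / 1000)   # 750.0       -> KB/sec  (decimal kilobytes)
print(bytes_per_second / 1024)   # 732.421875  -> KiB/sec (binary kilobytes)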