I know base 2 and base 10. I know computers like base 2 and humans like base 10. I just can't, for some reason, wrap my head around the fact that SD cards, hard drives, SSDs, etc., aren't bigger than what's advertised...
And I know that hard drive manufacturers use decimal units, but Windows uses binary units, like so:
* Manufacturer *
Base 10 (decimal) - kilobyte, megabyte, gigabyte, terabyte:
1KB = 1,000 (10^3)
1MB = 1,000,000 (1000 x 1000) or (10^6)
1GB = 1,000,000,000 (1000 x 1000 x 1000) or (10^9)
1TB = 1,000,000,000,000 (1000 x 1000 x 1000 x 1000) or (10^12)
* Windows *
Base 2 (binary) - kibibyte, mebibyte, gibibyte, tebibyte:
1KiB = 1,024 (2^10) and so on up!
1MiB = 1,048,576 (1024 x 1024) or (2^20)
1GiB = 1,073,741,824 (1024 x 1024 x 1024) or (2^30)
1TiB = 1,099,511,627,776 (1024 x 1024 x 1024 x 1024) or (2^40)
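To double-check those numbers I ran a quick Python snippet (nothing drive-specific, just the arithmetic):

    # Decimal (manufacturer) units: powers of 1000
    for i, name in enumerate(["KB", "MB", "GB", "TB"], start=1):
        print(f"1 {name} = {1000**i:,} bytes (10^{3*i})")

    # Binary (Windows-style) units: powers of 1024
    for i, name in enumerate(["KiB", "MiB", "GiB", "TiB"], start=1):
        print(f"1 {name} = {1024**i:,} bytes (2^{10*i})")

It prints exactly the tables above, so I'm confident I have the unit definitions right.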
I get that! Here is where I'm getting really confused. When a manufacturer states a drive is 1TB, doesn't that mean it can store exactly 1,000,000,000,000 bytes? And since a "real" terabyte in binary is 1,099,511,627,776 bytes, why does Windows show less than that decimal 1,000,000,000,000? If you ask me:
The TiB (binary) is bigger than the TB (decimal).
1,099,511,627,776 - 1,000,000,000,000 = 99,511,627,776
So, if a binary terabyte has 99,511,627,776 more bytes than a decimal terabyte, then why does it end up showing as less than the actual 1,000,000,000,000-byte decimal terabyte???
1,000,000,000,000 / (1024 * 1024 * 1024) ≈ 931 gigabytes.
This is what my capacity shows on my 1TB hard drive in Windows. Again, I understand that powers of 2 go:
1 2 4 8 16 32 64 128 256 512 1024
and binary 1001b to decimal is 9.
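Here is the division I assume Windows is doing to get that 931 figure (the comments are my guesses at what the displayed labels mean), checked in Python:

    advertised = 1_000_000_000_000   # 1 TB the way the manufacturer counts it (10^12 bytes)

    gib = advertised / 1024**3       # same byte count, expressed in GiB
    tib = advertised / 1024**4       # same byte count, expressed in TiB

    print(f"{gib:.2f}")              # 931.32 -- the "931 GB" my drive shows
    print(f"{tib:.3f}")              # 0.909  -- not a full binary terabyte

    print(0b1001)                    # 9 -- sanity check on the binary-to-decimal example

The arithmetic checks out, but it doesn't resolve my confusion about which direction the difference should go.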
Here is the part below that really gets me:
1024 bytes - 1000 bytes = 24 bytes.
Does that mean they're saying 1000 - 24 = 976 bytes is the true total?
As if the manufacturer cheated me out of the extra 24 bytes per kilobyte, so the operating system takes them out of the 1000, making it 976 bytes? Making it less? To me it would seem that if the manufacturer says 1000 bytes is 1000 and Windows says it's 1024, shouldn't I have 24 extra bytes above what the manufacturer says (1024) instead of under it (976)?
So why would that be less? Since 1024 > 1000?
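I even tried relabeling a fixed byte count both ways in Python to see where a number like 976 could come from:

    bytes_on_disk = 1000             # the manufacturer's "1 KB"

    print(bytes_on_disk / 1000)      # 1.0       -> "1 KB" under the decimal label
    print(bytes_on_disk / 1024)      # 0.9765625 -> ~0.977 KiB under the binary label

The byte count never changes in that snippet, only the number printed next to the label, so I still can't see who is "taking" anything.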
Take the default cluster size of the NTFS file system on a hard disk, for example: it's 4096, a power of 2 (binary). Kind of like saying 4096 - 96 = 4000, and then 4000 - 96 = 3904.
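Same thing when I convert that cluster number both ways in Python:

    cluster = 4096                   # default NTFS cluster size, in bytes

    print(cluster / 1024)            # 4.0   -> exactly 4 KiB
    print(cluster / 1000)            # 4.096 -> 4.096 decimal KB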
Why does the space go down? I don't get it.
Can you shed some light on the matter?
Thank you for your help!