Well, a drive that holds 1,000,000,000,000 bytes is a true TB drive, at least in the way most humans count millions, billions and so on.
IMHO it's Microsoft that's being silly by using binary TB instead of decimal TB. They're supposed to be making Windows friendly for non-computer-literate users, yet their choice of non-intuitive measures like this causes endless confusion. This very question comes up all the time on these and countless other fora.
There's no reason binary TB has to be used, because there's no reason a drive has to hold exactly 2^40 bytes. And I say this coming from a background of 40 years in computer work, with decades of assembly language experience and a thorough understanding of binary number systems.
I really wish they'd at least offer an "Advanced Setting" in the folder options to let people choose which measurement units they'd prefer to use.
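The arithmetic behind the confusion is easy to show. Here's a quick sketch (my own illustration, not from any poster) of why a drive sold as "1 TB" in decimal units shows up as roughly "931 GB" in Windows' binary units:

```python
# A drive sold as "1 TB" uses the decimal prefix: 10^12 bytes.
# Windows divides by binary powers but keeps the decimal labels.

DECIMAL_TB = 10**12   # 1 TB as the drive maker counts it
BINARY_GB = 2**30     # 1 "GB" as Windows counts it (really 1 GiB)
BINARY_TB = 2**40     # 1 "TB" as Windows counts it (really 1 TiB)

drive_bytes = 1 * DECIMAL_TB

print(round(drive_bytes / BINARY_GB, 1))  # -> 931.3 ("GB" Windows reports)
print(round(drive_bytes / BINARY_TB, 3))  # -> 0.909 ("TB" Windows reports)
```

No bytes are "missing"; the same capacity is simply divided by 2^30 instead of 10^9 before being labelled "GB".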
I agree. I have about 4 years' experience mucking about with computers, and 10 years of general usage. I only started noticing the "missing" GBs when I bought a new hard drive a few years back, and it's bugged me ever since... TBH I could spend hours talking to someone who knows computers inside out; I have hundreds of questions, but whenever I get a chance I completely forget all of them ¬¬ Sod's law, eh? lol
The funny thing is that cache sizes on HDDs are given in binary units, but with the same suffixes. If they wanted to be consistent they would say 500GB storage capacity and 32MiB cache; instead they say 500GB storage capacity and 32MB cache, so they are mixing binary and decimal units themselves.
I don't think decimal units have much relevance in computer systems; everything else is binary. If you store a file of 4 bytes, it still takes up at least a 512-byte sector; everything is tied to the binary system. Decimal units mainly suit HDD vendors, who get to sell roughly 7-10% less capacity (depending on the prefix) by quoting decimal instead.
I do agree the confusion is mainly Windows' fault: it uses binary units with decimal prefixes, which is very confusing and just plain wrong. But I think using decimal units to describe capacity is just something we've become accustomed to, not necessarily the 'right thing'. SSDs seem to do the same; they really are 64/128/256GiB: binary units, not decimal ones.
Still, they reserve some space and give you 60 gigabytes (not GiB) instead, so both the 4GB and the GiB/GB difference disappear at once. That makes binary capacities hard to get a right feeling for; they're not making it very easy for consumers to keep up.
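To put numbers on that SSD example (my own sketch, using the 64GiB / 60GB figures from the post above):

```python
# A 64 GiB SSD sold as "60 GB": how much is actually set aside?

GIB = 2**30   # binary gigabyte (GiB)
GB = 10**9    # decimal gigabyte (GB)

flash_bytes = 64 * GIB       # raw NAND inside the drive
advertised_bytes = 60 * GB   # capacity the consumer is sold

print(round(flash_bytes / GB, 2))   # -> 68.72 GB of raw flash
reserved = flash_bytes - advertised_bytes
print(round(reserved / GB, 2))      # -> 8.72 GB held back
print(round(100 * reserved / flash_bytes, 1))  # -> 12.7 (% reserved)
```

So the "missing" space is bigger than the 4 you'd get from 64 minus 60: the unit switch and the reserved area stack on top of each other.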
I don't think decimal units have much relevance in computer systems; everything else is binary.
That's true at a low level, but when you're using a GUI you're about as far removed from that level as you can possibly be. The problem is that binary units lead to completely unnecessary inconsistencies. For example, if I'm transferring a 10GB file at 100MByte/sec (and transfer rates ARE universally quoted in DECIMAL units), I'd expect it to take 100 seconds, not 107 seconds.
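The 107-second figure checks out. A quick sketch (my own, restating the example above): a file the GUI labels "10GB" is really 10 GiB, while the transfer rate is decimal:

```python
# A "10GB" file in binary units, moved at a decimal 100 MByte/sec.

file_bytes = 10 * 2**30   # 10 GiB = 10,737,418,240 bytes
rate = 100 * 10**6        # 100 MB/s, decimal as rates are quoted

print(round(file_bytes / rate, 1))    # -> 107.4 seconds, observed
print(round(10 * 10**9 / rate, 1))    # -> 100.0 seconds, expected
```

The 7% gap is exactly the GiB/GB ratio (2^30 / 10^9 ≈ 1.074); it has nothing to do with the transfer itself.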
As far as I'm concerned, using binary units in the GUI makes about as much sense as calibrating a car speedometer in "axle RPM". It may be relevant to the engineer who designs the transmission, but that doesn't mean it's something the driver should have to concern himself with.