HDD space: a kind of nit-picky problem.

Tags:
  • Hard Drives
  • Formatting
  • Storage

Do you want to buy a 300GB HDD with 300GB of actual space to use?

Total: 11 votes

  • Yes: 55 %
  • No: 19 %
  • Don't care: 28 %
August 15, 2006 1:23:56 AM

OK, this might seem kind of dumb or whatever to some people, but this is my problem. Granted, it is not a real problem, just something that bugs me. You buy a new hard drive that is, say, 300GB. Right away, because of the differing ways that the hard drive maker and an OS count space, you are going to lose "x" number of GBs, and then you lose more space after you format the drive. I don't know exact numbers as far as loss of space due to formatting, but let's just say anywhere from 500MB to a GB of space lost to formatting. (I can, and probably am, way off on the amount of space lost to formatting; correct me if you like.) So you buy a brand new 300GB drive and you lose, say, 5 or 6GB of space without anything even being "put" on the drive. Now, again, I could be wrong, but the larger the drive, the more space lost to formatting and differing counting methods; again, correct me if I am wrong.

So in reality you are not getting a 300GB drive in real-world terms. You lose X amount of space before you even really begin to use the drive. For example, I have a 160GB external HDD I got for Christmas; once you plug it in, it's only 149GB, due to the above-cited reasons. So in reality I didn't get a 160GB drive, I got a 149GB drive, as far as usable space goes. That has always bugged me. If I extrapolate and assume the same proportion of space loss, then a 300GB drive is going to lose around 20 to 22GB of usable space. Granted, it is there, but it is gone because of the differing counting methods.

I have always been of the opinion that the HDD maker knows about the amount of loss; it is just a fact of life, and no, there is no point in suing them or anything like that, since by their count the advertised capacity is correct. But why not increase the capacity, at a bit of an added cost of course (and I believe most people would have no problem paying an additional 10 or 20 bucks), to give us the space that we wanted when we bought the drive? If I wanted 300GB of storage, I would basically have to buy a 350GB drive to get the amount of storage I originally wanted.

Now, am I alone in feeling this way, or do the rest of you kind of feel the same way? Again, I have nothing against the drive makers, because it is just a difference between decimal counting and binary counting that leads to the "space loss." But in an era when we are very soon going to be seeing single-TB and larger HDDs, you're getting into a "loss" of 40, 50, even 100 or more GB, which for someone like me who tends to fill up HDDs fairly quickly is kind of a pain in the a$$. I would rather pay X dollars more and get a drive that has the advertised storage capacity. Am I wrong in thinking this way, or do you agree?

Sorry for any spelling or factual screw ups.


August 15, 2006 1:48:25 AM

Turn off System Restore, or reduce the amount of space it keeps for backups (I believe it reserves about 12% of total HDD space). That ought to make you a little happier. Beyond that, all you can hope to do is grin and bear it, bro. That, or just buy a larger HDD. But try the System Restore thing; I think that will work for ya.

RIG specs
Antec P180 PerformanceSeries Mid-Tower Case
SeaSonic S12 600 watt power supply
Asus A8N32 SLI mobo AMD N-Force 4 SLIX16 (bios 1103 V02.58)
RealTek 97 onboard digital 5.1 Surround
AMD Athlon 64 X2 4800+ Toledo Core, 2 X 1mb L2 cache (AMD driver 1.3.1.0 w/MS hotfix & AMD Dual Core Optimizer)
2 gigs of Corsair TwinX3500LL Pro @ 437Mhz 2-3-2-6-1T
2- BFG 7900 GT OC 256mb in SLI (nvidia driver 91.31)
Western Digital RAPTOR 74.3 gig 10-K rpm HDD for XP & Apps
Maxtor SATA II 250 G HDD for gaming, movies, MP3's
Maxtor SATA II 250 G HDD for document backup (unplugged)
Sony CDrom 52X
Plextor 708-A DVD/CD rom
Logitech Z-5500 Digital 5.1 THX 500watts
August 15, 2006 2:23:47 AM

I am not really concerned about space once I start using the drive, because any used space is of course up to me. I just mean the initial purchase of the drive itself, without even putting anything on there. Again, like I said in the topic, it is kind of a nit-picky gripe and nothing that will probably ever change.
August 15, 2006 2:55:17 AM

HDD companies count their MB and GB and so on in 1000s, while Windows uses 1024; there's your loss of space.
August 15, 2006 2:57:22 AM

I think the bigger issue is the counting methods as opposed to formatting.

"One gigabyte, or GB, equals one billion bytes when referring to hard drive capacity."

http://www.seagate.com/docs/pdf/marketing/po_barracuda_...

On the other hand, your computer thinks 1 gig = 1024 megs, 1 meg = 1024 KB, 1 KB = 1024 bytes. That means your computer thinks 1 gig = 1,073,741,824 bytes. Therefore you lose about 7% of your hard drive space to a marketing ploy.

The bit lost to formatting is a much smaller amount, I think.
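
If you want to see where that ~7% comes from, here is a quick C sketch (just my own illustrative arithmetic, using the 160GB external drive mentioned above):

#include <stdio.h>

int main(void) {
    /* Advertised capacity: the manufacturer's "160 GB", counted in powers of 10. */
    double advertised_bytes = 160.0 * 1000 * 1000 * 1000;

    /* What the OS reports: the same bytes divided by binary multipliers (powers of 2). */
    double reported_gb = advertised_bytes / (1024.0 * 1024.0 * 1024.0);

    printf("160 GB advertised = %.1f GB as Windows counts it\n", reported_gb);
    printf("Difference: about %.1f%%\n", (1.0 - reported_gb / 160.0) * 100.0);
    return 0;
}

That prints roughly 149.0 GB and a difference of about 6.9%, which matches the 160GB-drive-shows-149GB example above.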
August 15, 2006 3:01:30 AM

Exactly, and that is about 4-8 KB, and if you're mad about that little bit of space, you need to check your blood pressure :lol: 
August 15, 2006 4:09:57 AM

Bio, I feel a bit the way you do.

I dislike the fact that HD manufacturers have re-invented the meaning of gigabyte to be 1 billion (1000^3) instead of 1024^3.

Aside from it being an obvious marketing gimmick to inflate the number of "gigabytes" per dollar, it is also a way of presenting capacity that "joe eternally clueless (and sometimes very dense) consumer" can actually understand.

Part of the problem (for joe consumer) is that Windows reports the capacity in *both* decimal values and binary values. What I mean is, when you check drive capacity in Windows explorer, you'll get two numbers, one that says 1,000,000,000 (1 billion) and another right next to it that says 953MB. It is so much easier for HD manufacturers to tell "joe" to look at the one billion figure and call it a gigabyte than to explain to "joe" that a true gigabyte is 1024^3.

For the technically inclined, baptizing a gigabyte as 1 billion bytes is a bit offensive but for the less technical it makes things a lot easier. In addition to that, it allows the manufacturers to "inflate" the number of gigabytes so everyone is a winner except those that know that 13h in a computer means 19d to a human being.

What happened to the value of a gigabyte reminds me of a joke that goes like this,

An atheist moves into a neighborhood of very devout Catholics. Come Friday, the atheist is barbecuing some bacon-wrapped pork chops, sending quite a scent through the devout neighborhood.

Some Catholics, those having an olfactory sense as developed as their religious conviction, decide that the most practical way to prevent the presence of such scents on Fridays is to convince the atheist to become Catholic. After a friendly meeting consisting of many Catholics and one lone atheist, a deal is reached whereby the atheist will be baptized Catholic the upcoming Sunday.

On Sunday, 10:00 a.m. sharp, a proud and solemn Catholic priest sprinkles sacred water upon the atheist, solemnly declaring, "You were born atheist, you were raised atheist, but now, you are *Catholic*." The remainder of the beautiful Sunday goes by uneventfully.

Come the following Friday, the new Catholic is barbecuing a well-seasoned 3-pound T-bone that exudes a strong and flavorful scent throughout the neighborhood. Immediately, a group of devout Catholics heads over to the new Catholic's backyard to remind him that meat cannot be consumed on Fridays, only fish!

As they arrive, they find the new Catholic sprinkling A1 sauce onto the T-bone while solemnly declaring, "You were born beef, you were raised beef, but now, you are fish!!!"

As you can surmise, HD manufacturers are probably a bunch of atheists!! :lol: 

Hope that helps.
August 15, 2006 7:27:59 PM

Quote:
I dislike the fact that HD manufacturers have re-invented the meaning of gigabyte to be 1 billion (1000^3) instead of 1024^3.


I would disagree with that. HD manufacturers didn't "reinvent" anything. The prefixes kilo-, mega-, giga-, etc. are all from the International System of Units (typically abbreviated "SI"), which was established in 1960 as a derivative of the French Le Système International d'Unités. The SI system is the modern metric system of units and their associated abbreviations.

Giga-, in SI units, is defined to be a prefix meaning a multiplier of 1,000,000,000 (1 billion).

It was computer software writers and OS manufacturers that wanted to use the SI prefixes but conveniently used powers of 2 rather than the SI standard powers of 10 because it made computations easier when written in machine code.

That wasn't a big deal back in the DOS days, because the error was only 2.4% for using kilo- to mean 1024 instead of 1000. However, as the prefix gets larger, the error gets larger as well. At the Giga- level, you see that the error is now 7.4%. At the Tera- level, the error jumps to a full 10%.
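
Just to spell out how the error grows with each prefix, here is a small C sketch (my own arithmetic, nothing official):

#include <stdio.h>
#include <math.h>

int main(void) {
    const char *prefix[] = { "Kilo", "Mega", "Giga", "Tera" };

    /* Error from treating 1000^n as if it were 1024^n, for n = 1..4. */
    for (int n = 1; n <= 4; n++) {
        double error = (pow(1024.0, n) - pow(1000.0, n)) / pow(1000.0, n) * 100.0;
        printf("%s-: %.1f%% difference\n", prefix[n - 1], error);
    }
    return 0;
}

That prints 2.4% for Kilo-, 4.9% for Mega-, 7.4% for Giga-, and 10.0% for Tera-.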

Also, it's important to remember that only the prefixes give you a false sense of size. If you talk about the size of anything in bytes (no prefix), then there is no confusion.

As an example, take the 300GB hard drive. That drive holds 300,000,000,000 bytes (at least, possibly a little more). 300 billion bytes is 300 billion bytes, no matter how you report it with prefixes. If the hard drive manufacturer wants to call that 300GB and Windows wants to call it 279GB, it really doesn't matter. It's still 300 billion bytes.

There is a movement underway to introduce new prefixes into the SI system of units to deal with this issue. Kibi-, Mebi-, and Gibi- are the new, additional prefixes proposed to deal with powers of 2 rather than powers of 10. The K, M, and G are retained for ease of equating them with the multipliers that everyone's familiar with, while the "bi" is used to denote binary representations (powers of 2) rather than powers of 10. The abbreviations used are Ki, Mi, and Gi.

Thus the hard drive manufacturer says it's 300GB, and Windows should say it's 279GiB. They are equal, by the way (300GB = 279GiB = 300 billion bytes). This avoids all confusion, and avoids having to infer whether someone means powers of 2 or powers of 10 when using a prefix.
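
For the 300GB drive, the conversion is a single division (a quick C sketch using the same numbers):

#include <stdio.h>

int main(void) {
    /* 300 GB as the manufacturer counts it: 300 billion bytes. */
    double bytes = 300.0 * 1000 * 1000 * 1000;

    /* The same bytes expressed in GiB (1 GiB = 1024^3 bytes), which is the
       unit Windows actually labels "GB" in the Properties dialog. */
    double gib = bytes / (1024.0 * 1024.0 * 1024.0);

    printf("300 GB = %.0f GiB\n", gib);
    return 0;
}

That prints 279 GiB, the same figure Windows shows as "279 GB".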

In the end, don't let yourself be disappointed or feel shortchanged just because the number in the "Properties" dialog box says 279 instead of 300. You still bought (and received) your 300 billion bytes of space.
August 16, 2006 12:13:18 AM

I totally understand the differences are due to the differences between base-10 and base-2 counting systems. I never said I felt shortchanged in buying a drive with X GBs. As I said, it is kind of a nit-picky situation for me. I know in reality I am still getting 1,000,000,000 bytes per advertised GB, but the prefixes we use to denote the amounts are, for the most part, used exclusively in regard to computers, from what I have seen and read. You don't say you have "gigapples" or "megacars" to denote an amount of something; you would say you have a billion apples or a million cars. Kilo, mega, giga, etc. are for the most part only used in regard to space on a drive, disc, RAM, etc. (I know, don't give me the metric system thing.) Yes, we're splitting hairs with this, but a gigabyte is a gigabyte in binary terms; that's pretty much it for Joe Schmo consumer.

While denoting a GB as 1,000,000,000 bytes might be technically true, it's not true in the real world because of the aforementioned discrepancy in counting methods. When you buy a drive in the real world you are not getting 300GB, you are getting 279GB. Yes, you are getting 300 billion bytes, so "technically" it is true, but in the real world you're only getting 279GB of actual space. All computers are binary systems at their heart (not including quantum computers, but those don't count at this time), therefore when referring to computer storage, binary counting is what should be used. You would of course run into problems with regard to chip speed, etc., otherwise.

As I have said before it is really a non issue because we are basically debating semantics as far as terminology is concerned.
August 16, 2006 12:46:30 AM

Quote:
... the prefixes we use to denote the amounts are for the most part used exclusivly in regards to computers, from what I have seen and read. ... The kilo, mega, giga etc. are for the most part only used in regards to space on a drive, disc ram etc. ... While denoting a GB as 1,000,000,000 might be technically true, its not true in the real world because of the aforementioned discrepancy in counting methods.


While in the computer industry the prefixes are virtually always implicitly understood to represent powers of 2, that convention is confined to the computer industry. There are many industries where it isn't the norm.

For example, take the telecom industry. They've been using the prefixes for decades to describe data rates, and always use them to mean powers of 10, as the SI system dictates. Ethernet moves data at 10,000,000 bits per second, Fast Ethernet at 100,000,000 bits per second, and Gigabit Ethernet at 1,000,000,000 bits per second. A telco DS1 (T-1) line moves data at 1.544 Mbps = 1,544,000 bits per second, which is 24 DS0 channels of 64Kbps (64,000 bits per second) plus framing overhead.
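
The DS1 numbers work out exactly in decimal; here is the arithmetic as a tiny C sketch (the 8,000 bits per second is the standard DS1 framing overhead):

#include <stdio.h>

int main(void) {
    /* A DS1 (T-1) carries 24 DS0 channels of 64,000 bits/sec each,
       plus 8,000 bits/sec of framing overhead. */
    long ds0_rate = 64000;
    long channels = 24;
    long framing  = 8000;

    long ds1_rate = channels * ds0_rate + framing;
    printf("DS1 = %ld bits/sec = %.3f Mbps (decimal Mega-)\n",
           ds1_rate, ds1_rate / 1000000.0);
    return 0;
}

That comes out to 1,544,000 bits/sec, i.e. 1.544 Mbps, with every prefix meaning a power of 10.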

So while your definition of the "real world" dictates that powers of 2 are the norm, that's only because you work with computers. If you worked with telco equipment, you'd be on the other side of the fence. 8)
August 16, 2006 1:07:18 AM

Quote:
... the prefixes we use to denote the amounts are for the most part used exclusivly in regards to computers, from what I have seen and read. ... The kilo, mega, giga etc. are for the most part only used in regards to space on a drive, disc ram etc. ... While denoting a GB as 1,000,000,000 might be technically true, its not true in the real world because of the aforementioned discrepancy in counting methods.


While in the computer industry the prefixes are virtually always implicitly understood to represent powers of 2, that convention is confined to the computer industry. There are many industries where it isn't the norm.

For example, take the telecom industry. They've been using the prefixes for decades to describe data rates, and always use them to mean powers of 10, as the SI system dictates. Ethernet moves data at 10,000,000 bits per second, Fast Ethernet at 100,000,000 bits per second, and Gigabit Ethernet at 1,000,000,000 bits per second. A telco DS1 (T-1) line moves data at 1.544 Mbps = 1,544,000 bits per second, which is 24 DS0 channels of 64Kbps (64,000 bits per second) plus framing overhead.

So while your definition of the "real world" dictates that powers of 2 are the norm, that's only because you work with computers. If you worked with telco equipment, you'd be on the other side of the fence. 8)

Oh, I agree; I meant it specifically in reference to the computer industry. I may not have made that clear. I realize there are many other industries that use those prefixes, but I was referring specifically to computer storage terms. I thought a T-1 line was faster than 1.544Mbps; I have a cable modem with a "possible" 3Mbps speed, and the most I have managed is right around 1Mbps downloading from BitTorrent. But I guess a T-1 is a dedicated line, so you will most likely be getting close to full bandwidth. Then again, you can get even 1Mbps DSL lines now for what I would think is cheaper than a T-1, and that should give you good speed.

Speaking of bandwidth, I just ran a bandwidth calculator from cnet.com, and while I can't vouch for the accuracy of the test, since I have no idea how it is performed, this was my result.

http://reviews.cnet.com/7009-7254_7-0.html?CType=2273&a...

and here is another quick test I ran

http://www.bandwidthplace.com/speedtest/results.php

Which is almost T-1 speed for about 1/3 the cost.
August 16, 2006 2:53:57 AM

You are correct that hard drive manufacturers didn't "re-invent" the meaning of gigabyte, as I stated in my previous post. Strictly speaking, it was a nascent computer science, about 50 years ago, that misused the prefixes.

Flawed as it may have been, since the birth of computers, Kilo in the computer field has always meant 1024 instead of the formal 1000. Mega, Giga, Tera, etc., were correspondingly misused.

By now there are hundreds of thousands (millions, perhaps?) of computer science books, journals, and technical and scientific documents related to computer technology that implicitly and explicitly use the prefixes with their "binary" values.

Because of this, I maintain that in the context of computer technology, the meaning of those prefixes is different from the one they have in a decimal context. It certainly would not be an isolated case in which context changes the meaning or value of a word.

It is my opinion that hard drive manufacturers chose the decimal meaning only because it makes their product sound like more than it actually is; 300GB in the computer field means more than 300 billion bytes. It is also quite conspicuous that hard drive manufacturers are the only ones trying to stick to the decimal system. Why aren't memory manufacturers doing this? Why aren't Intel and AMD specifying the cache sizes of their CPUs in decimal? Why isn't anyone else in the computer field doing this?

If, as a programmer, I request a memory buffer of 3MB from Windows (or Linux, or MPE, or Guardian, or Pick, or OS/360, or VMS, or MVS, or younameit OS) and I only get 3 million bytes, we have a big problem. Come to think of it, this might be the reason there are so many known cases of buffer overflows in Windows. (kidding of course!)

I find the difference annoying because, for one, it is the only exception, and second, because I believe the exception is being made for self-serving purposes instead of for any purpose of value.

Peace.
August 16, 2006 2:03:08 PM

Quote:
It is also quite conspicuous that hard drive manufacturers are the only ones trying to stick to the decimal system. ... Why isn't anyone else in the computer field doing this ?

If, as a programmer, I request a memory buffer of 3MB from Windows (or Linux, or MPE, or Guardian, or Pick, or OS/360, or VMS, or MVS, or younameit OS) and I only get 3 million bytes, we have a big problem.

I find the difference annoying because, for one, it is the only exception and, second because I believe the exception is being made for self-serving purposes instead of any purpose of value.


I don't think everyone in the computer field is doing this. The aforementioned data rates example comes to mind.

Intel claims their Gigabit Ethernet cards move data at 1 Gb/sec, which is exactly correct. Their use of Giga- in that instance is a power of 10, not a power of 2. Modem manufacturers, as well, use powers of 10 (56 Kbps = 56,000 bits per second).

When you request memory on an OS, you don't use a prefix. 8) You request your memory as a decimal number, constant, or multiplied-out number. The malloc() API of C doesn't understand prefixes. So there shouldn't be any confusion there.
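
For example, a "3MB" buffer has to be spelled out in bytes either way; here is a minimal C sketch (the sizes are just illustrative):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* "3MB" in the programmer's (binary) sense: 3 * 1024 * 1024 bytes. */
    size_t three_mb_binary  = 3 * 1024 * 1024;   /* 3,145,728 bytes */

    /* "3MB" in the SI (decimal) sense: 3 million bytes. */
    size_t three_mb_decimal = 3 * 1000 * 1000;   /* 3,000,000 bytes */

    char *buf = malloc(three_mb_binary);         /* malloc only ever sees a byte count */
    if (buf == NULL)
        return 1;

    printf("binary 3MB = %lu bytes, decimal 3MB = %lu bytes, difference = %lu bytes\n",
           (unsigned long)three_mb_binary,
           (unsigned long)three_mb_decimal,
           (unsigned long)(three_mb_binary - three_mb_decimal));

    free(buf);
    return 0;
}

Whichever convention you intend, the number handed to malloc() is unambiguous.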

Now, as to whether HD manufacturers intentionally chose the method to give them larger numbers -- I don't know. That's a matter for debate, and no one from Western Digital, Seagate, or anyone else is ever going to admit to the real story.
August 17, 2006 12:45:55 AM

Quote:


I don't think everyone in the computer field is doing this. The aforementioned data rates example comes to mind.

Intel claims their Gigabit Ethernet cards move data at 1 Gb/sec, which is exactly correct. Their use of Giga- in that instance is a power of 10, not a power of 2. Modem manufacturers, as well, use powers of 10 (56 Kbps = 56,000 bits per second).


I didn't know, nor had I noticed, that quantities used by the telecom industry are stated in decimal. Thank you for pointing that out; I'm sure it could be important in some cases, even for a programmer.

From a programmer's viewpoint, though, even when dealing with networks, everything is dealt with in hex, which implicitly carries with it the binary definitions of the prefixes (e.g., 1000h = 4KB = 1 page). The quantities that provide the best performance for network hardware are invariably powers of 2 (very commonly the value 512).

It seems that the common pattern is to use decimal when the quantities in question are destined for consumers. For anyone who works close to the hardware, decimal is a rarity. It is also clear that the hardware, whether it is a hard drive or a piece of telecom equipment, is inherently binary in nature, which makes it natural to use hex when programming it. While using decimal is possible, it is awkward.
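
A tiny C illustration of the hex habit (assuming the common 4KB page size):

#include <stdio.h>

int main(void) {
    /* The same quantity written in hex and as binary-prefix arithmetic. */
    unsigned int page_hex = 0x1000;      /* 1000h */
    unsigned int page_kib = 4 * 1024;    /* 4KB in the binary sense */

    printf("1000h = %u bytes, 4 * 1024 = %u bytes: one typical page\n",
           page_hex, page_kib);

    /* The 512-byte block size mentioned above, written in hex, is 200h. */
    printf("512 = %Xh\n", 512);
    return 0;
}

Both lines print the kind of round power-of-2 figures that fall out naturally in hex.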

Quote:

When you request memory on an OS, you don't use a prefix. 8) You request your memory as a decimal number, constant, or multiplied-out number. The malloc() API of C doesn't understand prefixes. So there shouldn't be any confusion there.


True, no prefix is involved in memory allocations, but memory allocations are usually done in hex, which implies the binary prefixes. Less granular allocations are done in pages, and VirtualAllocEx will automatically round the value up to a multiple of a page (1000h). Allocating blocks in decimal, while sometimes done (mostly by programmers that don't understand how the hardware works), is suboptimal. Access to a memory block is optimal when the block is aligned on a 32-bit boundary. Alignment occurs naturally when allocations are done in hex, and not so when done in decimal. Most C programmers, when using malloc, specify the block size in hex.
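
A minimal Windows-specific sketch of that page rounding (shown with VirtualAlloc rather than VirtualAllocEx, which rounds the same way; the 5000-byte request is just an example, and the 4KB page is an assumption about a typical system):

#include <windows.h>
#include <stdio.h>

int main(void) {
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    printf("Page size: %lu bytes\n", si.dwPageSize);

    /* Ask for 5000 bytes; the committed region comes back rounded up to whole pages. */
    void *p = VirtualAlloc(NULL, 5000, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (p == NULL)
        return 1;

    MEMORY_BASIC_INFORMATION mbi;
    VirtualQuery(p, &mbi, sizeof(mbi));
    printf("Requested 5000 bytes, got a region of %lu bytes\n",
           (unsigned long)mbi.RegionSize);

    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}

On a typical system that reports a 4,096-byte page and an 8,192-byte region for the 5,000-byte request.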

Quote:

Now, as to whether HD manufacturers intentionally chose the method to give them larger numbers -- I don't know. That's a matter for debate, and no one from Western Digital, Seagate, or anyone else is ever going to admit to the real story.


I can't prove my suspicions either, but there is little evidence to the contrary.

Peace.