
Talk:Binary prefix/Archive 4

From Wikipedia, the free encyclopedia


We against the industry.

I neither understand nor like the "ibibyte" thing.

Aw, c'mon, it's not that hard! The average "filesystems designer", which you claim to be, should have little trouble with this stuff. —Herbee 09:12, August 22, 2005 (UTC)
Just consider that all filesystem specifications, from the first to the newest, use kilobyte and not kibibyte. I think that is a problem ;) Claunia 09:38, 22 August 2005 (UTC)
Granted, but so what? This stuff is still not hard to understand, even if you don't like it. —Herbee 16:06, August 23, 2005 (UTC)

For decades, the industry always used 1000-based multiples for drive capacities while software used 1024-based multiples. They always used Kb as 1000000 bytes in drives and Kb as 1048576 bytes in software.

Silly mistake, and that from a "computer science teacher"! The industry has been known to use K to mean either 1000 or 1024, but never a million. —Herbee 09:12, August 22, 2005 (UTC)
You never make a mistake? I simply put Kb when I wanted to say Mb. Claunia 09:38, 22 August 2005 (UTC)
Sorry, I just couldn't resist. Yes, I do make mistakes, just as silly as yours, and I do get kicked in the butt for it. It's human nature, I guess. Don't get upset, it's all part of the game. —Herbee 16:06, August 23, 2005 (UTC)

Why should we change the whole world while people are still using it?

Sigh… —Herbee 09:12, August 22, 2005 (UTC)
You against the world. Wanna be the new Gandhi? Claunia 09:38, 22 August 2005 (UTC)
You lost me there. It's more like innovation alongside conservatism. Not against: the dinosaurs will manage to become extinct all by themselves…;-) —Herbee 16:18, August 23, 2005 (UTC)

It is great that the IEC created new terms to avoid confusion, but it is creating confusion for non-technically-aware people.

How so? The meaning of K isn't changed: it's just as undefined as ever. Why would people be confused about a new symbol (Ki) with a precise definition? Are they, perhaps, already so full of symbols that they just cannot handle one more? Or is it the case that a certain "systems administrator" doesn't see the difference between K and Ki? —Herbee 09:12, August 22, 2005 (UTC)
THIS is a public encyclopaedia. It is intended for the whole public, not only those with computer, mathematics, and science knowledge. Most people don't even know what the unit prefixes mean. Claunia 09:38, 22 August 2005 (UTC)
As you say, the meaning of 'kilobyte' hasn't changed; it's still ambiguous. Someone created kibibyte, which is well defined, but instead of also creating a kidibyte (or whatever 1000 bytes would be called), they left "kilobyte" as it was, still ambiguous (unless you can convince everyone in the world to stop using kilobyte to mean 1024 bytes, which seems unlikely)
Won't work. Your "kidi" would have exactly the same meaning as "kilo", so what's the point? You don't expect the BIPM to obsolete the "kilo" prefix, do you? Or would you have us use "kilo" to mean 1000, except for bytes, where it's "kidi"? You must be "kiding"! —Herbee 16:32, August 23, 2005 (UTC)
kilobyte = 1024 or 1000 bytes. It's not "exactly equal" to anything. Just look at the size of this talk page, and show me another dictionary definition which is as disputed as this one? Ojw 17:51, 23 August 2005 (UTC)
Billion!  :-) Dpbsmith (talk) 19:32, 23 August 2005 (UTC)
Essentially, solving a dispute by telling half the people that they're wrong didn't work, because it would require those people to agree. Creating two new units (kibibyte and kidibyte, for example) would allow an upgrade path from the disputed term to a clearly-defined term in every instance, but that option wasn't chosen. Ojw 17:12, 22 August 2005 (UTC)
But kilo- already means 1000 in every other instance it is used. Computer terminology is the only aberration. — Omegatron 18:29, August 22, 2005 (UTC)
I bet the IEC had many of the same debates we're now seeing on WP. But I assume this particular issue was handled with the question of "what is most technically correct?". That leads to the question, why would we want 2 prefixes that both mean 1000? Having 2 isn't an "upgrade path" because that implies that the intermediate designation will be discontinued at some future date and then we get to have these debates all over again. "But we've been using 'kidi with a D' bytes for years now. Using 'kilobytes' to mean 1000 bytes is just too confusing." But it's wrong to say that this change implies that half the people are wrong. Only the first two people in history to use kilo to mean 1024 were wrong.--JJLatWiki 15:00, 24 August 2005 (UTC)

Try to explain why they are reading gibibytes in Wikipedia while their Windows XP says gigabytes. It was difficult enough to explain to them the difference between drive manufacturers' gigabytes and real gigabytes.

Please try to sign your comments, User:Claunia. You can do this by typing four tildes (~~~~). —Herbee 09:12, August 22, 2005 (UTC)
Still getting used to wikicode. Claunia 09:38, 22 August 2005 (UTC)
Maybe you should stop thinking of Microsoft gigabytes as "real" gigabytes. It will be years before OSes start differentiating between giga and gibi. Obviously it's easier to ignore the issue. Maybe someday some sewer-dwelling lawyer will decide to sue Microsoft for falsely claiming their leech of a client's hard drive had only 50 terabytes of free space when it actually had 55 terabytes. When that happens, Microsoft might join the larger community. Until then, explain to people that Windows XP has a flaw in how it calculates a gigabyte that makes it seem like there is less space. Or just keep telling them, like we've all been doing for years and years, that there are 2 common meanings for gigabyte and Windows wastes a lot of space on the hard drive, so use those measurements as a rough estimate...--JJLatWiki 15:41, 24 August 2005 (UTC)
IMHO, most (if not all) distros of Linux differentiate between GiB and GB. Nippoo 09:32, 13 July 2006 (UTC)
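For readers following the arithmetic in this thread, here is a sketch of the discrepancy being described, in Python; the 250 GB drive size is an arbitrary illustrative figure, not something from the discussion above:

    # A drive sold as "250 GB" uses the decimal gigabyte (10^9 bytes).
    # An OS that counts in binary units divides by 2^30 but may still
    # label the result "GB", producing the apparent shortfall.
    DECIMAL_GB = 10**9   # gigabyte as drive manufacturers use it
    BINARY_GIB = 2**30   # gibibyte, which many OSes label "GB"

    advertised_gb = 250  # illustrative figure only
    size_in_bytes = advertised_gb * DECIMAL_GB
    reported_gib = size_in_bytes / BINARY_GIB

    print(f"Advertised: {advertised_gb} GB = {size_in_bytes:,} bytes")
    print(f"Reported by a binary-counting OS: {reported_gib:.1f} GiB")
    # Advertised: 250 GB = 250,000,000,000 bytes
    # Reported by a binary-counting OS: 232.8 GiB

No space is actually "wasted"; the same number of bytes is simply being divided by a larger unit.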

"Is great that the IEC created new terms to avoid confussion, but it is creating confussion on non technically aware people."

Non-technically aware people don't know what a kilobyte is, either. Most assume it means 1000 bytes, like every other usage of the kilo- prefix they have encountered. — Omegatron 13:24, August 22, 2005 (UTC)

It's not "we against the industry." You can't get more "industry" than the IEC and other organizations which have endorsed the standard. If it must be phrased in terms of a conflict, it is more like "engineers versus marketers." Engineers have a vested interest in making measurements clear. Marketers have a vested interest in making them fuzzy, to make it harder for consumers to compare products. And, once their competition adopts a slightly misleading usage that puts them in a better light, it's a marketer's job to make sure that their own company follows suit.

To the extent that we take sides, Wikipedia should be on the side of "making things clearer." To the extent that imprecise or commercially loaded language is part of ordinary discourse, we should note that fact and explain it.

We should do whatever is needed to make sure that users understand.

In this case, we have a usage which is officially endorsed by a number of standard organizations, is easily understood, is precise, but is less familiar to most readers. Against it, we have a usage which is imprecise, ambiguous, not used consistently by those who use it, but is more familiar to most readers.

To my mind, on balance we should favor the first usage, primarily because the "common" usage is not easily understood. The average person has no idea whether a gigabyte of RAM stores as much as a gigabyte of disk. When someone uses the word "gigabyte," nobody, no matter how experienced in the industry, really knows whether they mean the binary or the decimal usage. Sure, you can guess at the meaning, but we should not put our readers into a situation where they need to guess. Dpbsmith (talk) 15:58, 22 August 2005 (UTC)

Definitely agree. We've already covered the policy of this here. — Omegatron 18:29, August 22, 2005 (UTC)
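As a concrete footnote to the point about guessing: the gap between the decimal and binary readings of a prefix widens at every step, which is why the ambiguity matters more for gigabytes and terabytes than it did for kilobytes. A minimal Python sketch of the drift:

    # Percentage by which the binary reading of each prefix exceeds
    # its decimal (SI) meaning; the discrepancy compounds per power.
    prefixes = ["kilo", "mega", "giga", "tera", "peta", "exa"]
    for i, name in enumerate(prefixes, start=1):
        decimal_value = 10 ** (3 * i)
        binary_value = 2 ** (10 * i)
        drift = (binary_value / decimal_value - 1) * 100
        print(f"{name:>4}: binary exceeds decimal by {drift:4.1f}%")
    # kilo: 2.4%, mega: 4.9%, giga: 7.4%,
    # tera: 10.0%, peta: 12.6%, exa: 15.3%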

Incorrect is POV?

Does anyone else interpret the "wrongly...", "even though this is incorrect..." as POV pushing? Don't get me wrong, the MiB/GiB/etc. standards are a welcome change, but the binary usage has been regarded as correct for decades, and even in the industry, the prefixes are widespread and regarded as correct. StuartH 08:23, 28 October 2005 (UTC)

I don't think so. The meaning of those prefixes is defined by the BIPM, the International Bureau of Weights and Measures, and their use in the industry was incorrect according to those definitions. Cans containing 13 ounces of coffee are sometimes referred to by their manufacturers as "13-ounce pounds", but I would not expect to be challenged if I were to say "It is incorrect to refer to 13 ounces as a 'pound.'" This is because in the U.S. it is the National Institute of Standards and Technology that defines these terms, and Maxwell House is not the NIST.
Furthermore, the confusion is recent because my perception was that during, say, the sixties and seventies, the binary meanings for "kilo" and "mega" were well understood to be sort of a joke. It was only when the PC revolution suddenly expanded the use of computers to a huge nontechnical population that people started to believe that the binary meanings were "real."
What organization would you suggest has the right to define the meanings of the SI prefixes? Is there a serious, widely held viewpoint that says that Microsoft, not the BIPM, is the authority for this?
I am not sure, but I imagine that in countries using the SI, the BIPM is actually the legal authority for the meanings of terms defined in the SI.
Eventually, yes, there is a point when a technical term is misused by marketers and the uninformed to the point where the misuse does become standard, but I don't think this is the case here. Note that as soon as there was perceived to be a problem, the industry's response was to introduce binary prefixes, not to lobby for the BIPM to add the binary meanings for the old terms to the existing standard.
These phrases could be expanded to "even though this is incorrect according to the SI" or "even though this usage is contrary to the international usage that had been in effect for decades," but I think that would be unnecessarily wordy and pedantic. Dpbsmith (talk) 10:42, 28 October 2005 (UTC)
Remember, the Ki/Mi/Gi etc. prefixes came along after the fact, when use of the traditional prefixes in the binary sense was already widespread, and you hardly ever see them used. Plugwash 10:55, 28 October 2005 (UTC)
They came along just about the time there started to be serious confusion about the meaning of the traditional prefixes. Up until the nineties, people who used them in the binary sense understood that that wasn't their correct meaning. The use of the IEC prefixes is indeed not widespread. However, the use of SI prefixes in the binary sense is still exactly as correct or incorrect as the use of the word "pound" to refer to a 13-ounce can of coffee. Dpbsmith (talk) 12:58, 28 October 2005 (UTC)
The way I see it, though, is that the binary prefixes were "borrowed" from the almost-equivalent SI units because of the need to work in powers of two for addressing and other binary purposes. So they don't really claim to be SI units, just that SI prefixes avoided the need to provide a whole new set of definitions. Maybe they should have used "MiB" and the like from the start, but until now, the context has largely eliminated any ambiguity. But first and foremost, the fact that most of the industry and most consumers still use the old binary prefixes indicates that they are not incorrect to use. StuartH 23:14, 28 October 2005 (UTC)
For decades? I'm sure you meant centuries. Besides, the incorrect usage is a minority even within the computing industry itself. Most computing terms use the prefixes correctly; it's only a very narrow area of terminology that's gone haywire. Delicates 00:39, 29 October 2005 (UTC)
Yes, it is pushing a point of view. The way our language works, someone can coin a new word, or a new prefix. But there is nothing which gives them absolute control in perpetuity over its use. These prefixes aren't "trademarked" or anything like that.
So yes, the originators of the usage can certainly argue that other usage is "incorrect" and contrary to the original intention, but that doesn't really mean that it really fails any linguistic test of correctness.
The BIPM has no legal authority to set any standards; it is just responsible for day-to-day operations. Even the CGPM has no general, plenary legal authority to set general standards on the use of the English language. Anything they say along those lines is more of an advisory nature, not a "legal authority".
Various standards organizations in the computer field did recognize that usage as correct, long before any "kibi-" and the like were ever invented. Gene Nygaard 13:14, 28 October 2005 (UTC)
What's the source of the legal definition of "meter" in the U. S.? Dpbsmith (talk) 14:25, 28 October 2005 (UTC)
Also, which standards organizations were they? Dpbsmith (talk) 15:53, 28 October 2005 (UTC)
I bet if this was about "brontobyte" or "millimicron", no one would object to the use of the word "incorrect", even though they are "widely used", too. — Omegatron 14:18, 28 October 2005 (UTC)
They are? – Smyth\talk 15:51, 28 October 2005 (UTC)
Another reason for using the word "incorrect" is that dictionaries (specifically AHD4 and Merriam-Webster) do not give the binary meanings for kilo-, mega-, giga-, even though the informal binary usage has been current for decades. Dpbsmith (talk) 14:25, 28 October 2005 (UTC)
Then the dictionaries are incomplete and should be updated. :) – Smyth\talk 15:51, 28 October 2005 (UTC)

Moving away "Specific units of IEC 60027-2 A.2"

In the "See also" section there are some tables specific to IEC 60027-2 A.2 that IMHO should be moved to the article IEC 60027, keeping Binary prefix only for general discussion of the topic. Moreover, in those tables I read that, for SI prefixes, the notations "(or 2x)" and "(or KB)" have been added. Since it is stated that the tables refer to IEC recommendations, and not to "common usage" (regardless of my POV), I think those notations should be removed, or moved elsewhere, or the captions should be changed. In the current form, it seems that IEC 60027-2 A.2 allows both decimal and binary meanings for SI prefixes, which AFAIK is not correct. SalvoIsaja 09:36, 6 November 2005 (UTC)

Decimal-to-Binary Prefixes and Binary-to-Decimal Prefixes Converter

Hi guys. Okay, I added a converter to the binary prefixes page as an External Link a while ago, and noticed it's been removed now.

Why would anyone remove it? Is the contribution a bad thing? I fail to understand why it was removed.

It happened here. I think it was an overzealous vandalism revert. Go ahead and re-add it if you want. Also, please sign your comments like this: --~~~~. --Doradus 11:06, 21 June 2006 (UTC)

Citing of SI prefixes implies "Millibyte" and "Microbyte" and ...

The whole article avoids mentioning the other existing prefixes for SI units. All physical units of the SI standard can be scaled up (with kilo, mega, giga, ...) but ALSO always scaled down (with deci, milli, micro, ...). However, for theoretical units like the byte, only the upscaling prefixes are used, because the other SI prefixes oh-so-surprisingly make no sense here.

So when citing it, the article should note that the SI may not require, but obviously implies, applicability of ALL scaling prefixes. And it should note that the not-so-widespread use, limited meaningfulness, or even non-existence of units like "microbyte" or "decibyte" therefore hints that byte unit prefixes may not be identical in meaning to their SI unit counterparts. (Things like "microbyte" might make sense theoretically for cryptography or computer compression algorithms, but apart from that would be totally idiotic.) --mario

Thank you for your suggestion! When you feel an article needs improvement, please feel free to make whatever changes you feel are needed. Wikipedia is a wiki, so anyone can edit almost any article by simply following the Edit this page link at the top. You don't even need to log in! (Although there are some reasons why you might like to…) The Wikipedia community encourages you to be bold. Don't worry too much about making honest mistakes—they're likely to be found and corrected quickly. If you're not sure how editing works, check out how to edit a page, or use the sandbox to try out your editing skills. New contributors are always welcome. --Doradus 13:08, 7 June 2006 (UTC)
Just wanted to have it discussed first, as it happens to be only my personal nitpick on the topic. (Apart from that, I have half a dozen wikis of my own ;). And I'd rather wait for full OpenID support instead of registering just yet...
Diminutive prefixes also make sense for data rates. Millibytes per second, and so on. — Omegatron 13:56, 7 June 2006 (UTC)
There are a couple of Google entries for 'mBps' and other such measurement units. While I haven't heard of any of them before (KB/s or MB/s, OK) and I can't outright imagine any devices where such slow transfer rates could typically be measured, it sounds plausible nevertheless. Only I'd like to differentiate between such compound types (the milli probably stems more from the time aspect of such units) and the "byte" base type originally discussed in the article. (Though the mB/s argument must then be mentioned as well...) --mario
No one measures in yoctometres or gigafarads, either. All of the units have natural limits to the prefixes that are commonly used with them, but all the prefixes are still valid, for hypothetical examples, etc.
Right, "attometers" or something like that are too small to be measured in practice, and others like an "exajoule" are unlikely to occour in nature. The difference between such physical values and the mathemetical / computer sciences Byte however is, that a millibyte not only cannot be measured, but plainly makes no sense / does not exist. If you think about it, a byte can hold values from 0 to 255, and a bit can old 0 or 1. A supposed "decibyte" however cannot hold information at all (I fail to understand what 80% of a bit could be useful for). OTH I have no reference to back up what is mostly my opionion here. But anyhow the article should note that there is a real discrepancy in applying the same-named SI prefixes to non-physical units like the byte. (This whole articles purpose was to highlight the difference between Base2 and Base10 prefixes/names, not?) --mario
A millibit could easily be a measure of information entropy. Our own article on the topic says English text has an entropy of 1.1 to 1.6 bits per character. I could see someone referring to this as 1100 - 1600 mb. --Doradus 17:11, 16 June 2006 (UTC)
The question is not whether someone could refer to it in this way. Someone could also refer to it as "one trillion, one hundred billion to one trillion, six hundred billion picobits." Someone could refer to it as 0.0003 to 0.0005 decimal kilodigits per character. Or (I'll leave the math as an exercise for the reader) refer to it in tera-dice-rolls or femto-roulette-wheel-spins. The purpose of this article is not to provide a demonstration of ingenuity, or clever ways to construct hypothetical units that are never used in practice but would have a well-defined meaning if they were. Dpbsmith (talk) 17:30, 16 June 2006 (UTC)
"...must be applicable for all SI units but make no sense for bits and bytes."
That's not true at all. They make just as much sense as fractional bytes ("4.3 KB/s"). — Omegatron 18:00, 7 June 2006 (UTC)
"4.3 KB/s" is first a compound unit (and not just the prefixed Byte), and second a clear approximation. Of course you can have half a byte (that's a nibble or clear 4 bits). But you seriously can't have 0.3721 of a byte (not with nowadays computers). There needs to be a real example for the existence of fractions of the smallest information container in computer science.. --mario
It takes 20.1 bits to encode a Unicode code point. If you have a random stream of Unicode code points, it will take on average 20.1 bits per character. Information theory frequently takes fractional bits.--Prosfilaes 05:30, 10 June 2006 (UTC)
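Since the thread keeps returning to whether fractional bits are meaningful, here is a minimal Python check of the 20.1-bit figure cited above; the surprisal probabilities at the end are illustrative values of my own choosing, not from the discussion:

    import math

    # Bits needed to distinguish one Unicode code point among the
    # 1,114,112 possibilities (U+0000 through U+10FFFF).
    unicode_code_points = 0x110000
    print(f"{math.log2(unicode_code_points):.1f} bits")  # 20.1 bits

    # Fractional bits arise naturally as surprisal: an outcome with
    # probability p carries -log2(p) bits of information.
    for p in (0.5, 0.9, 0.99):
        print(f"p = {p}: {-math.log2(p):.3f} bits")
    # p = 0.5: 1.000 bits; p = 0.9: 0.152 bits; p = 0.99: 0.014 bits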
How could you have millibytes? Microbytes? One thousandth of a byte? One millionth of a byte? A bit is the smallest unit in computing, and a byte consists of 8 bits. One eighth (1/8) of a byte makes sense (= 1 bit). One thousandth of a byte (that is, 1/1000 of a byte) doesn't make sense to me. -- McoreD, 2006-06-16T09:03
If I'm giving you 10 apples a day, how many apples do you get in an hour? This isn't difficult math here. — Omegatron 11:33, 16 June 2006 (UTC)
Yes, but nobody but a nerd making a joke would state the answer as "four hundred sixteen and two-thirds milliapples."
Fractional bits do have a meaning and uses. For example, the channel capacity of a noisy channel would likely involve fractional bits. But just because the word "millibits," if it were ever used, would have a well-defined meaning, does not mean that it is a real word in real use. It's rather like "vigintillion." It's in the dictionary, and in theory it has a well-defined meaning, but nobody ever really uses it—except in tables of names of big numbers—because, in the contexts in which it is needed one would just state it in scientific notation.
I am very skeptical that the word "millibit" is really used to any significant extent. I do not think it should be in the article unless someone can cite some good example, say a book on information theory or a research paper on telecommunications or something like that, to show that it is really in reasonably widespread and customary use. Google Books gives five hits on "millibit" but four of them look like scannos to me, and one is apparently an entry in a technical dictionary. Dpbsmith (talk) 14:02, 16 June 2006 (UTC)
Omegatron, I agree there is a mathematical sense to it. But there is no physical sense afaik. 0 or 1 is the smallest unit you can break in machine language. Physically, a millibit would imply a 0 or 1 is made out of 1000 unknown small things. But again, I agree, it mathematically makes sense. :) -- McoreD, 2006-06-20T00:31

Ok, I've totally lost track of what we're arguing here. I think we all agree on the following points:

  • Fractional units of a bit could potentially be used in some fields
  • In practice, such units are never used

Do we agree on this? If so, what's the problem? --Doradus 22:37, 16 June 2006 (UTC)

There isn't one yet. I would have a big problem if, say, someone were to extend the table "Binary prefixes using SI symbols (non-standard, but common)" to include explicit listings of millibits through femtobits. I don't think fractional prefixes need to be mentioned at all. If someone thinks they should be, I would want it limited to a short parenthetical remark like this:
(Quantities of fractions of a bit are encountered in some technical areas, notably information theory. When they are, they are usually expressed numerically using ordinary decimal or scientific notation. Words such as "millibits" are rarely if ever encountered). Dpbsmith (talk) 02:36, 17 June 2006 (UTC)
If there were an actual SI standard it could be argued that the table should list every name endorsed by the standard. However, the SI does not cover units of information, and the use of SI prefixes in this context is, as the table says, "non-standard, but common." I think the article only needs to discuss the words that are in common use. It is significant that the IEC standard does not include any fractional quantities. Dpbsmith (talk) 02:37, 17 June 2006 (UTC)

As I said above, we list yoctometres and gigafarads. They are valid units, even if no one ever uses them. Why should this be any different? If you think we shouldn't list valid units simply because they aren't ever used, then we should be removing them from all unit articles. — Omegatron 20:39, 17 June 2006 (UTC)

This is different, because yoctometres and gigafarads are part of the SI standard. The binary meanings of the SI prefixes are not. They exist only as a matter of customary usage, so only the values which are customarily used are relevant. Dpbsmith (talk) 21:06, 17 June 2006 (UTC)

In case anyone cares, I calculate that one millibit is the quantity of information contained in a statement that you were already 99.93% sure of. For example, the statement "my birthday is not on February 29th" contains approximately one millibit of information. --Doradus 17:51, 19 June 2006 (UTC)

Yes, but which did you actually mean when you wrote the sentence?
Humour value aside, there might be some practical applications of millibits or even microbits (per unit time) on channels with extremely narrow bandwidths or very poor signal-to-noise ratios.
Atlant 22:32, 19 June 2006 (UTC)
The issue is not whether there is any practical application for the concept of fractional bits. There is. Everyone agrees to that. The question is whether or not the term "millibit" is widely and customarily used to measure fractional-bit quantities. It is not. Fractional-bit quantities are customarily referred to by using the word "bit" together with ordinary numeric notation. The proof that it is not a term in common use is that nobody knows whether a millibit customarily refers to 10^-3 or 2^-10 bits... because it is not customarily used at all.
The reason why a binary understanding of the decimal prefixes arose, and why binary prefixes were coined, was a practical need to name quantities of bits which, because of the physical structure of digital circuitry, naturally came in sizes that are powers of 2.
Although fractional bits are useful, I cannot come up with any explanation, however contrived, that would say that multiples of 1/1024 bit occur more often in information theory than multiples of 1/1000 bit. Dpbsmith (talk) 22:44, 19 June 2006 (UTC)
Raisbeck, "Information Theory," MIT Press, 1963, p. 18 refers to "225.7 bits;" on p. 19, "4.76 bits." On p. 25, we find ".72 bit per letter" and ".96 bit per second." On p. 48 he estimates that a chess master playing simultaneous blindfold chess is "taking in .2 bit per second." Dpbsmith (talk) 22:50, 19 June 2006 (UTC)
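For what it's worth, Doradus's millibit figure above checks out. A quick Python verification, using the usual 1-in-1461 calendar approximation for leap-day birthdays (my assumption, not stated in the thread):

    import math

    # One millibit corresponds to a statement you were already
    # p = 2^-0.001, about 99.93%, sure of (surprisal = -log2(p)).
    p_one_millibit = 2 ** -0.001
    print(f"{p_one_millibit:.4%}")  # 99.9307%

    # "My birthday is not on February 29th": roughly 1 day in 1461
    # is a leap day, so the statement was ~99.93% expected.
    p_not_feb29 = 1460 / 1461
    print(f"{-math.log2(p_not_feb29) * 1000:.2f} millibits")  # ~0.99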

Let's stay focused here folks. As I said above, we all agree that:

  • Fractional prefixes on bits may be useful in theory
  • In practice, they are never used

So I ask again, what are we arguing? Is the issue whether or not fractional units of a bit should be mentioned in the article? Personally, I don't see any reason we couldn't include the above two points, but I think they are currently much too prominent in the intro, so I'm moving them down to somewhere more suitable. --Doradus 11:05, 20 June 2006 (UTC)

It's also about the fact that we list ridiculous prefixes for other units that don't even make any physical sense. I like your change, though. — Omegatron 13:33, 20 June 2006 (UTC)
I'd prefer to characterize it this way:
  • Fractional bits are used and are useful in some areas such as information and communication theory.
  • It is not clear whether or not SI prefixes are useful, even hypothetically.
  • In practice, they are virtually never used.
  • Even if the term "millibit" had much use, nobody has articulated any reason why one would want to refer to multiples of 1/1024 bit rather than 1/1000 bit.
We are arguing about what, if anything, should be said about fractional prefixes in the article.
I like Doradus's present wording. I think it's just right. Dpbsmith (talk) 14:03, 20 June 2006 (UTC)