
Talk:0.999.../Arguments/Archive 8

From Wikipedia, the free encyclopedia

Answer this very simple question

1 is a rational number; .999... is not a rational number. By definition an irrational number is a number which cannot be expressed as a fraction m/n, where m and n are integers, with n non-zero. However, saying that .999... equals 1 would mean that it would have to be both irrational and rational at the same time (and vice versa), which is, clearly, nonsense.

And don't tell me that .999... is not an irrational number because it equals one. That's just going in circles. What I want you to do is to prove that .999... is not irrational without jumping to the conclusion that it is indeed 1, and without skipping any steps in between.

And by the way, I realize that as long as .999... is being used as 1 it will, indeed mean 1. But that in itself does not necessarily mean that it is equal to 1.

Oh, and by the way, does this mean that 1.999.... equals 2 and so on?

—The preceding unsigned comment was added by 205.161.125.254 (talk) 14:51, August 21, 2007 (UTC)

The proof that 0.999... = 1 does not rely on 0.999... being rational. Therefore it's not "going in circles" to infer that 0.999... must be rational once we've proved that 0.999... = 1.
And yes, 1.999... = 2, etc.
Oli Filth 15:05, 21 August 2007 (UTC)
Every repeating decimal is rational. So 0.999... is rational regardless of whether you believe it equals one. It also happens to equal one, of course, but there is no need to invoke this fact to convince oneself that it is rational. Silly rabbit 15:16, 21 August 2007 (UTC)
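Silly rabbit's claim that every repeating decimal is rational can be checked mechanically. The sketch below is an illustration (not code anyone posted in the thread), using Python's exact fractions and the standard trick of subtracting x from 10^k·x:

```python
from fractions import Fraction

def repeating_to_fraction(repetend: str) -> Fraction:
    """Convert 0.(repetend repeated forever) to an exact fraction.

    Standard trick: if x = 0.ddd... with a repetend of length k,
    then 10**k * x - x = d, so x = d / (10**k - 1).
    """
    k = len(repetend)
    return Fraction(int(repetend), 10**k - 1)

print(repeating_to_fraction("2"))       # 2/9
print(repeating_to_fraction("142857"))  # 1/7
print(repeating_to_fraction("9"))       # 1
```

Note that the last case falls out of the same formula with no special treatment: 9/(10 - 1) = 1.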


OK, here's what I don't understand. I know that by this logic 1.999... equals 2 and so on. However, this cannot be applied to, say, .2222... or other such repeating decimals. Why give special treatment of... how should I put this... conversion (I don't want to say rounding up) to the first kind of repeating decimal and not the other?

There is no "special treatment". 0.999... = 9/9 ( = 1), just as 0.222... = 2/9. Oli Filth 15:32, 21 August 2007 (UTC)
All repeating decimals are rationals - they all can be expressed as fractions. And some rationals are also naturals, in other words, some fractions can be expressed as integers.
This is not "conversion", in the same sense that 10/2 = 5 is not a "conversion" (neither is rounding up). They are the same number, we just change the notation.
It is not special treatment; it is a natural property of the way we represent numbers, like the fact that if you sum all the digits of a number that is a multiple of three, you get another multiple of three. So it doesn't work for 7 or 29? Do you think this is unfair? wildie · wilđ di¢e · wilł die 16:08, 21 August 2007 (UTC)
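Wildie's digit-sum example is itself just a property of base-10 notation (because 10 ≡ 1 mod 3). A quick illustrative check, not part of the original thread:

```python
def digit_sum(n: int) -> int:
    """Sum of the base-10 digits of n."""
    return sum(int(d) for d in str(n))

# The rule: n is divisible by 3 exactly when its digit sum is.
for n in (12, 999, 29, 7):
    print(n, digit_sum(n), (n % 3 == 0) == (digit_sum(n) % 3 == 0))
```

The analogous digit-sum shortcut simply does not exist for 7 or 29; that asymmetry is a feature of the notation, not of the numbers.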

OK, last question: is it possible to divide the pie into three equal pieces without having anything remaining? This may not seem like it ties into this article's subject but it does for me. Btw, I anticipate your answer to be yes. Thanks a lot. —The preceding unsigned comment was added by 205.161.125.254 (talk)

Yes, it is possible - it's just something hard to do with precision. :^) wildie · wilđ di¢e · wilł die 16:08, 21 August 2007 (UTC)

Thank you for your responses guys. They helped a lot. I realize that saying that .9999 = 1 shouldn't be more controversial than saying 1/3 = .333333333. Part of the problem is the fact that many of us intuitively try to envision this as a progression. As we add each successive 9 to .999..., it gets closer to 1; add another nine and it gets even closer still, and so on. One thinks that even an infinite number of nines isn't enough to get to one and doesn't know how to account for the difference. —The preceding unsigned comment was added by 205.161.125.254 (talk) 16:38, August 21, 2007 (UTC)

Please, don't forget the "..." in the end of the number. .9999 =/= .9999... ;^)
You can see in the article: calculus does prove that adding .9 + .09 + .009 + ... out to infinity does equal 1. I know that's not intuitive, but neither is saying that the Earth is moving.
In other words, an "infinite" number of nines is precisely what you need to reach one. wildie · wilđ di¢e · wilł die 16:55, 21 August 2007 (UTC)
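The progression described above (each added 9 getting closer to 1) can be tabulated exactly with rational arithmetic. This is only a finite illustration of the limiting behaviour, not a proof:

```python
from fractions import Fraction

# Exact partial sums of .9 + .09 + .009 + ...
# After n terms the sum is 1 - 10**-n, so the gap to 1
# shrinks by a factor of 10 at every step.
s = Fraction(0)
term = Fraction(9, 10)
for n in range(1, 8):
    s += term
    print(n, s, 1 - s)   # the gap is exactly 1/10**n
    term /= 10
```

No finite partial sum equals 1, but the gap falls below every positive number, which is exactly what "the limit is 1" means.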

In the real world, .999 cannot be equal to 1.000. If you drill a hole in a piece of steel that is .999" and try to insert a steel rod of 1.000" diameter, it will not fit (assuming they are the same temperature). They are not equal. If you place a object that weighs .999 ounces on one side of a scale, and an object that weighs 1.000 ounce on the other, it will tip to the heavier. They are not equal. —Preceding unsigned comment added by 162.121.247.66 (talk) 21:49, 4 September 2007 (UTC)

You are absolutely correct. Oli Filth 21:54, 4 September 2007 (UTC)
And the article doesn't claim that 0.999 equals 1.000; but 0.999... equals 1. For obvious reasons the "real world" tends not to use the 0.999... notation. Thus, the article starts with "In mathematics, ..." Yours, Huon 22:24, 4 September 2007 (UTC)
Also, in the "real world", you don't even need infinitely many 9's; an object weighing 0.9999999999 ounces will balance perfectly with an object weighing 1 ounce on a standard scale. -- Meni Rosenfeld (talk) 22:30, 4 September 2007 (UTC)

I guess it depends on what you mean by 'standard scale'. You must admit that 0.9999999999 ounces is less than 1 ounce. If your 'standard scale' balances out perfectly, your scale is not accurate. They are not equal. —Preceding unsigned comment added by 162.121.247.66 (talk) 19:27, 6 September 2007 (UTC)

There are very few scales accurate enough to differentiate between the two. But that's not the point. The point is that .99999999 < 1 is true. But we're not talking about .9999999999. We're talking about .999999999... Gscshoyru 19:30, 6 September 2007 (UTC)

If there are infinite numbers between .9999999 and 1, they are infinitely not equal, they are infinitely far apart. —Preceding unsigned comment added by 162.121.247.66 (talk) 19:37, 6 September 2007 (UTC)

Actually, there aren't any infinite numbers between 0.9999999 and 1. But I can tell that you meant "there are infinitely many numbers between 0.9999999 and 1". However, I have no idea what you mean by "they are infinitely not equal, they are infinitely far apart". Also, please sign your posts by typing ~~~~ (4 tildes). -- Meni Rosenfeld (talk) 19:43, 6 September 2007 (UTC)
Well, other than the fact that they are infinitely far apart, everything you say is true. To get how far apart any two numbers are, just take the absolute value of the difference. But so? What's your point? Gscshoyru 19:43, 6 September 2007 (UTC)

Sorry about not signing my posts, I will use 162.121.247.66 19:54, 6 September 2007 (UTC) from now on. My point is simply that .999 is not equal to 1. 162.121.247.66 19:54, 6 September 2007 (UTC)

As said before, 0.999 is indeed not equal to 1, but 0.999... is equal to 1. -- Meni Rosenfeld (talk) 19:55, 6 September 2007 (UTC)

I understand what you mean by .999... Adding the three dots means it is infinite. I still think it is always "one value behind" no matter how far you carry it out. 162.121.247.66 20:01, 6 September 2007 (UTC)

Unfortunately, that is incorrect, and you should take a closer look at 0.999... to understand why. -- Meni Rosenfeld (talk) 20:03, 6 September 2007 (UTC)

I get it now. If I divide something into thirds, each of those pieces will be .333... That means that .333... X 3 = .999..., but I started out with a whole something. That means .999... = 1. It just seems non-intuitive at first. I concede. 162.121.247.66 20:37, 6 September 2007 (UTC)
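The pie argument just given can be replayed with exact rational arithmetic. A minimal illustrative sketch:

```python
from fractions import Fraction

third = Fraction(1, 3)    # the exact value that .333... denotes
print(third * 3)          # 1: three exact thirds give back the whole pie
print(third * 3 == 1)     # True
```

Nothing is lost in the multiplication because Fraction carries the exact value, just as the notation .333... does.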

You should see the ... as a limit. 0.999... = lim [x->0]: 1-x 84.87.183.181 (talk) 06:34, 7 May 2008 (UTC)

Anthony's Arguments

Removed. There is no reason for Wikipedia to be providing a platform for Anthony; it's time for him to go elsewhere. I'll be removing all his nonsense as it appears. --jpgordon∇∆∇∆ 15:40, 23 October 2007 (UTC)

  • I've semi-protected the page for a few hours; he seems to be enjoying himself too much. Go find yourself another playground, Anthony; you're wasting everyone's time here. --jpgordon∇∆∇∆ 20:38, 23 October 2007 (UTC)
    • Boy, we're dealing with a persistent one here. If he can't convince people with his words and exclamation points, he seems to think he can convince them with vandalism. That doesn't work either. Page protected for another brief while. --jpgordon∇∆∇∆ 20:30, 24 October 2007 (UTC)

If a form of number cannot be written with a finite number of digits then it is not a number

It is merely a process. For example, .333... is merely an approximation of 1/3. I know that a number is an abstract idea in itself but it is strange to equate a number like 5 to something that is growing infinitely. I realize that .999... has a limit but I hold that the limit is bigger than the number itself as .999... would never reach it, not even with an infinite number of digits (this should be possible to prove through induction). —Preceding unsigned comment added by 205.161.125.254 (talk) 14:02, 4 September 2007 (UTC)

So pi, which can't be written in a finite number of digits, no matter the base that you're in, is also not a number? .333... does in fact equal 1/3, and .999... does in fact equal 1. Induction only works on finite numbers -- it says, if something is true for 1 and true for n implies true for n+1, then it's true for all n -- but infinity is not an n, it is not a number at all. The number is not growing infinitely-- rather, it is growing closer and closer to 1 -- and the limit as the number of 9's goes to infinity is 1. Gscshoyru 14:09, 4 September 2007 (UTC)
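Gscshoyru's distinction (induction covers every finite n; the limit is a separate statement) can be illustrated as follows. The finite checks below are of course not themselves the limit argument; they only show what induction does and does not give you:

```python
from fractions import Fraction

# Induction does prove a statement for every FINITE n: the n-nine
# truncation 0.99...9 equals 1 - 10**-n, so it falls short of 1.
def truncation(n: int) -> Fraction:
    """0.99...9 with n nines, as an exact fraction."""
    return Fraction(10**n - 1, 10**n)

for n in (1, 5, 20):
    print(n, 1 - truncation(n))   # shortfall is exactly 10**-n

# But 0.999... is not any of these truncations; it is their limit,
# and the shortfall 10**-n drops below every positive number.
```

"True for every finite n" and "true of the limit" are different statements, which is why induction cannot carry the inequality over to 0.999... itself.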
So you are saying that induction only works on countable numbers. OK, so what you are saying is that .999... is NOT countable, and therefore has an infinite number of 9s. If I am not mistaken this seems to contradict this statement below: I have to comment that the "infinity" of the number of 3's in the decimal expansion of 1/3 is not the same as the "infinity" of the number of points on a line. The important distinction is that they have different cardinalities; the former is countable (cardinality ℵ₀) and the latter has the cardinality of the continuum, ℵ.
So if it is not countable how can it have a countable cardinality? I was also confused that, since infinity was not a number, how it was possible for a number like .999... to have a non-number part. —Preceding unsigned comment added by 205.161.125.254 (talk) 14:53, 4 September 2007 (UTC)
Ordinary induction works for natural numbers, which are finite. ℕ is countable but it is not finite. -- Meni Rosenfeld (talk) 15:07, 4 September 2007 (UTC)
Anon, note that the 0.999... representation implies a quantity of 9's that is infinite with cardinality ℵ₀, i.e., there are countably infinitely many 9's in it. But the number 0.999... is part of the reals, and the quantity of numbers in the reals is infinite with cardinality ℵ, i.e., there are uncountably infinitely many numbers in the reals. wildie · wilđ di¢e · wilł die 15:49, 4 September 2007 (UTC)
Just a tiny little note here: The use of unsubscripted ℵ to mean "the cardinality of the continuum" is very rare. I do recognize it, because Abraham Fraenkel used that notation and I looked up his book once. But don't count on it being recognized by the mathematical community at large. The clear, unambiguous notation is 2^ℵ₀; alternatives that have reasonably wide currency are 𝔠 and ℶ₁. --Trovatore 20:11, 6 September 2007 (UTC)
This is exactly why I have added a verbal description. But you're right, simply writing 2^ℵ₀ is better. -- Meni Rosenfeld (talk) 20:49, 6 September 2007 (UTC)

(edit conflict - I'm talking to the anon)

Sorry if YOU think so, but it is defined otherwise.
.333... is not a process. It is static; it has a precise and definite quantity of digits: exactly infinitely many '3's after the decimal point.
It is the same "infinite number of things" used to say that there are infinitely many points in a line, for example. Do you think a line is a "process"? I don't.
.333... is a valid form to write down that number. 1/3 is another - someone could say "it is not a number, it is a division!". A division is a process; a static number (even with part of it abbreviated for economy of space) is not. wildie · wilđ di¢e · wilł die 14:11, 4 September 2007 (UTC)
What I'm saying is: if "processes" are worse than "numbers", as you say, then .333... IS the number, and 1/3 is only an indirect way to represent it. 1/3 is a division; .333... is precisely the result of that division, as you should have learned years ago. wildie · wilđ di¢e · wilł die 14:17, 4 September 2007 (UTC)
I have to comment that the "infinity" of the number of 3's in the decimal expansion of 1/3 is not the same as the "infinity" of the number of points on a line. The important distinction is that they have different cardinalities; the former is countable (cardinality ℵ₀) and the latter has the cardinality of the continuum, ℵ.
Anyway, in some sense of the word, both 1/3 and 0.333... are "processes" (one is the process of taking 1 and dividing it by 3, the other is evaluating the series 3/10 + 3/100 + 3/1000 + ...), but both end up with the same number, known as "one third". -- Meni Rosenfeld (talk) 14:30, 4 September 2007 (UTC)
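The series for one third mentioned here can be partially summed exactly; the following is only an illustration of how the remainder behaves, not a substitute for the limit argument:

```python
from fractions import Fraction

# Partial sums of 3/10 + 3/100 + 3/1000 + ...
# After n terms, the remainder to 1/3 is exactly 1/(3 * 10**n).
s = Fraction(0)
for n in range(1, 6):
    s += Fraction(3, 10**n)
    print(n, s, Fraction(1, 3) - s)
```

The remainder never hits zero for finite n, but it shrinks geometrically, so the series converges to exactly 1/3.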
Oops, sorry for the confusion, Meni. wildie · wilđ di¢e · wilł die 15:49, 4 September 2007 (UTC)
Meni, I'd have to disagree. 1/3 and 0.333... are the results of the processes you describe, but they themselves are static numbers, not processes. Calbaer 17:29, 4 September 2007 (UTC)
I haven't phrased that as well as I had hoped. I guess what I should have said is that using the notations 1/3 and 0.333... alludes to the respective processes (this is the "sense of the word" that I intended), but themselves only denote the result of those processes. Does that make more sense? -- Meni Rosenfeld (talk) 17:48, 4 September 2007 (UTC)

Sure it's growing closer to one... infinitely closer... and never gets there. I would really appreciate it if you could prove otherwise. We all agree that "..." means that 9 is repeating infinitely. To me that signals that it is a process (or an approximation).

As for pi, it is defined as an irrational (nonrepeating) number. If you want to compare .999... to an irrational number like pi, then by all means go ahead (as it would mean that .999... cannot equal one, as it would not be possible to write it as 1/1, by definition of an irrational number).

The terms of the sequence 0.9, 0.99, 0.999, ... grow as you advance along the sequence. You can say that "advancing along the sequence", or evaluating its limit, are processes. However, the number 0.999... isn't growing anywhere. It is, by definition, the limit of this particular sequence. This limit is 1.
The article is full of proofs that 0.999... is, in fact, equal to 1. -- Meni Rosenfeld (talk) 14:36, 4 September 2007 (UTC)
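Meni's "limit of the sequence" definition unpacks to an epsilon-N statement: for every eps > 0 there is an N past which every partial sum is within eps of 1. A concrete sketch (illustrative, with exact fractions standing in for real-number reasoning):

```python
from fractions import Fraction

def N_for(eps: Fraction) -> int:
    """Smallest N with 1 - 0.99...9 (N nines) < eps, i.e. 10**-N < eps."""
    N = 1
    while Fraction(1, 10**N) >= eps:
        N += 1
    return N

for eps in (Fraction(1, 100), Fraction(1, 10**9)):
    N = N_for(eps)
    print(eps, N, Fraction(1, 10**N) < eps)   # True: past N, within eps of 1
```

Because such an N exists for every eps, the only number the sequence can be said to approach is 1, and 0.999... names that limit.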

Well, to me the difference between 1/3 and .333... is that while both are processes, the former is not an approximation, as it can be written with a finite number of digits. It can also be used as a unit in calculations, and because its result is stated implicitly rather than explicitly, it doesn't have the problem of having an infinite number of digits in its explicit result. P.S. Thanks guys for the quick responses and sorry for so many questions. Hopefully, some of them might be helpful to others as well.

Both representations are defined as the same thing.
It is true by convention but it doesn't seem right.
The string of symbols 0.333... has a finite number of characters - I counted eight, how many do you see?
I'm talking about digits, not characters (of which there are infinitely many, because the number .999... has repeating decimals).
What 0.333... represents, or more precisely, what "..." implies, is that the pattern repeats infinitely. And an infinite number of "3's" after the "0." is, precisely (with no "approximation"), the result of dividing 1 by 3. wildie · wilđ di¢e · wilł die 15:49, 4 September 2007 (UTC)
To me, infinite is not a number. Seems like infinite = desperation. A desperation to approximate what 1/3 really is. —Preceding unsigned comment added by 205.161.125.254 (talkcontribs) 17:16, September 4, 2007
Nobody is saying infinite is a "number". Infinite, here, is a quantity.
We are not desperate to define what 1/3 "really" is. We already know what it is: it is the number one-third. We are just using different symbols and representations for it. One of those representations is 1/3. Another is 0.333..., and both were created using pre-defined rules, so if you disagree with them, your problem in reality is with those rules.
1/3 is "really" nothing more than a 1, a / and a 3. It is not the number it represents. Numbers are concepts, not objects; restricting the ways we can represent a concept by the number of symbols needed is nonsense to me. wildie · wilđ di¢e · wilł die 17:36, 4 September 2007 (UTC)
Though it's at least vaguely interesting to contemplate the properties of the set of reals that can be represented exactly in a given base. --jpgordon∇∆∇∆ 17:45, 4 September 2007 (UTC)
[edit conflict] "Infinity" is not a real number, but be careful not to state "infinite = desperation" out loud in a room full of mathematicians, unless you want objects thrown at you. Virtually everything that is done in mathematics is related in one way or another to infinity (infinite sets etc.).
More to the point, just because infinity is not a real number doesn't mean it is not a concept we can employ while investigating real numbers. -- Meni Rosenfeld (talk) 17:48, 4 September 2007 (UTC)
I understand exactly what you are trying to say, though I wasn't so much talking about 1/3 by itself as about its result, which is (or may not be ;) ) equal to .333... And while I get the whole symbols analogy you present here, I would argue that while digits are symbols, abstractly they represent a slightly different concept.
Yes, while I understand that a number can be written in any number (pardon the pun) of ways, I have a bit of a problem stating that a number can be precisely written with an infinite number of digits. I know you can argue that any number can be written with an infinite number of zeroes, but in that case the zeroes can be removed without any effect on the value. What I am trying to say is that I have a slight doubt that a form of a number that doesn't contain a finite number of digits is a precise number. Does that make sense? —Preceding unsigned comment added by 205.161.125.254 (talk) 17:54, 4 September 2007 (UTC)
This may have made sense if decimal expansions didn't have a well-defined, rigorous meaning. We could talk about this all day but in the end it boils down to understanding the definitions (and the theorems). You should take a look at real number, maybe at decimal representation (though I don't think this one is written in a very helpful way), and then try to read 0.999... more carefully. Maybe you will then gain a better understanding. -- Meni Rosenfeld (talk) 18:05, 4 September 2007 (UTC)


(edit conflict again) Now that you concur that 1/3 and 0.333... are just symbols, I just ask: how do you know what a symbol represents? You need a definition. And the definitions used for decimal numbers and fractions imply that 1/3 and 0.333... are both representations of the same number.
To me that definition lacks consistency with the rest of number theory.
As a sidenote, I must say it again: 0.333... is not written with an infinite number of digits. It uses just eight.
I won't argue about semantics, but .333... is understood to represent a number that has an infinite number of digits, hence the "...". That's what I meant by an infinite number of digits.
And finally, no, "it" still doesn't make sense. In approximations, more precise numbers use MORE digits - you determine the precision based on the number of relevant digits. So if you already use infinitely many digits, your precision is absolute. wildie · wilđ di¢e · wilł die 18:16, 4 September 2007 (UTC)
OK, let me ask you a question: why is it that when we divide one by three, we write the answer as .333...? Why do we need the "..."? Do we not need it because having a finite number of 3s is not enough to give the exact value, thus requiring exactly an infinite number of 3s? But doesn't it also mean that there is no last 3? How is it that .333... is a PRECISE answer when the last digit 3 never comes?
Yes, you could say that the "..." is required because any finite number of 3's is not enough. If we try to multiply the result, 0.333..., back by 3, we get 0.999..., which, indeed, has no last 9, but it has infinitely many 9's, which is enough to amount to 1. -- Meni Rosenfeld (talk) 18:44, 4 September 2007 (UTC)
Which returns me to the one and only problem I have: how can we talk about a precise value of a number which has "no last 9" (which, to me, is the very definition of something that is imprecise). —Preceding unsigned comment added by 205.161.125.254 (talk) 18:55, 4 September 2007 (UTC)
Using Limits. Gscshoyru 18:57, 4 September 2007 (UTC)
Just because a graph has a bound or a limit doesn't mean it will ever reach it. See (a special case of an) Asymptote. That's what .999... is to me - an asymptote that comes close to but never reaches one.
The two are, in the way you describe, functionally identical... and both in the case of an asymptote and in this case, the limit of the function is the asymptote/1. Saying it will never reach it is true - but that only deals with finite amounts, and we're dealing with infinity. If infinity were a number, and you could plug it into an asymptotic function, the result you would get would be the value of the asymptote. The same holds true for .999... Gscshoyru 19:14, 4 September 2007 (UTC)
By definition, an asymptote is the limit. And as already mentioned above, 0.999... is not defined as the process (i.e. it is not the sequence 0.9, 0.99, 0.999, etc.); it is defined as the limit. Oli Filth 19:12, 4 September 2007 (UTC)
Put another way: the function f(x) = 1 - 10^-x has the asymptote y = 1, just as the sequence 0.9, 0.99, 0.999, ... has the limit 1. f(x) < 1 for any real x, just as 1 - 10^-n < 1 for any natural n. lim x→∞ f(x) = 1, just as lim n→∞ (1 - 10^-n) = 1. And this last limit just happens to be denoted by the symbol 0.999... . -- Meni Rosenfeld (talk) 19:16, 4 September 2007 (UTC)
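The asymptote/limit parallel can be seen numerically. The function f(x) = 1 - 10^-x is chosen here purely for illustration as a function with horizontal asymptote y = 1; it tracks the n-nine truncations of 0.999... exactly:

```python
# f has the horizontal asymptote y = 1: f(x) < 1 for every real x,
# yet f(x) -> 1 as x grows. The truncations of 0.999... behave identically.
def f(x: float) -> float:
    return 1 - 10**-x      # illustrative choice of asymptotic function

def nines(n: int) -> float:
    return 1 - 10**-n      # 0.99...9 with n nines

for n in (1, 3, 6):
    print(n, f(n), nines(n))   # both strictly below 1, both approaching it
```

Neither object "reaches" 1 at any finite argument; in both cases 1 is the limit, and 0.999... denotes that limit, not any single truncation.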
Nonsense.
You are applying the limitations of human beings to the representations they create for their concepts.
Everyone knows you can't write it down all the way to the last digit.
That too is nonsense. This has nothing to do with human limitations. It's not that a human being is unable to write the last digit, but rather that there IS no last digit.
And this doesn't change the fact that, in our number system, an infinite string of 3s after a decimal point has a meaning.
And I am not arguing that it doesn't have a meaning (it does, and I'm very close to believing that it is equal to one atm), but rather I have doubts about its precise meaning.


If you prefer a system where we don't have a meaning for these types of things, I'm really sorry, but your problem is with the world; guess who wins?
The "guess who wins" comment is really uncalled for. I am not putting myself against anybody here, just listing my arguments to gain a better understanding of the subject. I never claimed my opinion to be the absolute truth. I appreciate your help but there is no need for that.
And you're starting to mix 9's and 3's. Confused already? wildie · wilđ di¢e · wilł die 18:53, 4 September 2007 (UTC)
Again, your tone is uncalled for. I did mix those up (corrected now) but it wasn't because I didn't understand the difference between the two.
Why should it have a last 9? Does the fact that there is no "last natural number" stop us from discussing ℕ, the set of all natural numbers? -- Meni Rosenfeld (talk) 18:59, 4 September 2007 (UTC)
Let us all calm down and keep Wikipedia:Civility in mind. While we're at it, I wish to remind everyone (including anonymous editors) to sign their posts by typing ~~~~ (4 tildes) at the end. Also, it is better not to post replies in the middle of others' posts. -- Meni Rosenfeld (talk) 19:11, 4 September 2007 (UTC)
I will, I just need to register first. And I want to make it clear that I appreciate wildie's replies and am not trying to upset him :). Thanks guys, consider me convinced :).
As I attempted to emphasize, signing is effective and recommended even if you are not registered. Your IP address will be displayed, which helps identify which posts are yours and the time in which you made them. In case you are concerned about privacy, note that your IP address is already available to anyone, since you have made edits. -- Meni Rosenfeld (talk) 19:23, 4 September 2007 (UTC)
Thanks, that's what I'll do from now on. 205.161.125.254 19:37, 4 September 2007 (UTC)
Of course, you are also welcome to create an account :). -- Meni Rosenfeld (talk) 19:39, 4 September 2007 (UTC)
Sorry for the tone, but anon's main problem here was using his personal concepts a lot and mixing them with the established and standard ones, so I tried to show him that some things he claimed make sense just for himself.
I don't have a problem. Fully explaining my ideas seemed like the most logical way to check their correctness, and it proved to be right in the end.
And all of the math of the reals would have to be different to make him "right", not just recurring decimals. That would be a bad move.
Not sure what bad move you are referring to here. Let's just drop the whole issue, especially now that I no longer disagree about the main topic of this discussion.
As I said in another discussion here, what is intuitive is not always right - saying that the Earth is moving is counter-intuitive.
I agree that "what" is intuitive is not always right (the contrapositive of that statement is true also). I don't remember saying that something was counterintuitive but it's a good point nevertheless.
And to end this: there is a "final 9" (sort of) in 0.999... - in the position ω. wildie · wilđ di¢e · wilł die 19:44, 4 September 2007 (UTC)
That is not true. 0.999... does not have a 9 in position ω (nor does it have one in position ω+1, etc.). -- Meni Rosenfeld (talk) 19:55, 4 September 2007 (UTC)
I second that. Looks like you were using ω as a placeholder for infinity anyway.
Actually, he was referring to the ordinal number ω, which is a very specific kind of infinity. -- Meni Rosenfeld (talk) 20:29, 4 September 2007 (UTC)

The world would be a better place

If the mathematicians stopped with nonsense like this - clearly 0.999… is not the same as 1 - the simple fact that 0.999… is infinite and 1 isn't should tell them something. --IceHunter 12:58, 5 October 2007 (UTC)

In a perfect world, everyone would understand everything. Unfortunately, we are doomed to live in a world where some people know what they are talking about (e.g., mathematicians when discussing mathematics), and some don't (e.g., the general public). -- Meni Rosenfeld (talk) 15:24, 5 October 2007 (UTC)
To avoid being accused of a style over substance fallacy, I will also comment (as I have done all too often in the past) that the symbols 1 and 0.999... are not numbers themselves, but rather representations of numbers, interpreted by specific rules. These rules dictate that both are a representation of the same number, and it doesn't matter if one looks to you more infinite than the other (furthermore, "1" can be seen as shorthand for 1.000..., which is also "infinite"). -- Meni Rosenfeld (talk) 15:50, 5 October 2007 (UTC)
Wouldn't the "proving" of .999~ equaling 1 also prove that the concepts of mathematics are flawed? 71.74.154.252 03:08, 9 October 2007 (UTC)
No, it would not... mathematics WOULD be flawed, though, if .999... equaled something besides 1. People seem to have some sort of idea that there is a 1-1 correspondence between numbers and ways of writing down those numbers. This is not the case. Any number ("one" for instance) is really an abstract concept. We have various ways of representing that abstract concept. A person can express this concept by writing three particular characters from the Roman alphabet in a certain order ("o" followed by "n" followed by "e"). Another way is via the Arabic numeral "1". Both sets of symbols are different, yet they represent the same number. Given this, one must ask what value the symbols ".999..." represent. One needs to know a little about limits in order to see this, but those symbols are another way of representing a particular infinite series, and that series converges to 1. When one considers it this way, there is no need to worry about the nonsense of some "last 9". Mickeyg13 05:15, 9 October 2007 (UTC)

I suppose a more thorough explanation of what Mickeyg13 was getting at would be answering the question: how can 1 = 0.999... when 1 is an integer and 0.999... is not? The resolution comes through the concept of equivalent forms. I can make 3 appear not to be a member of the set of integers by writing it as 9/3 or 3.0000 or even 3.000..., but only 3 is the integer; the others are equivalent unsimplified versions of the integer, not integers themselves. The key is that though all of them must represent the same quantity, only one of the forms fits an additional set of rules, which makes those other forms equivalent to 3 and makes 3 generally considered the simplest form. By the way, the mathematician comment is very un-Wikipedia:Civility of you. Without mathematicians, the vast majority of the technology you enjoy, especially the computer you made that post with, would not exist. You should be thankful there are people who will devote their lives to studying something that is so complicated and difficult at times. A math-wiki 01:35, 11 October 2007 (UTC)

You have made an incorrect assumption. 0.999... is an integer. So are 3/3, (-1)^2, and 1/2 + 1/2. These things that I have written are all different representations of the number that is also represented by the numeral "1". Some of these representations are not readily defined in the integers. For example, 1/2 is not an integer and is only meaningful once we begin to consider the rational numbers, of which the integers form a subset. Regardless, there is only one "one". It is an integer, and no matter how we represent it, it is still an integer. Maelin (Talk | Contribs) 12:17, 11 October 2007 (UTC)
Off topic, but I'm not sure I agree that 'there is only one "one"'. A fairly standard construction of the real numbers starts with the natural numbers as finite ordinals, continues by defining integers as equivalence classes of ordered pairs of natural numbers, defines rational numbers as equivalence classes of ordered pairs of integers, and concludes by defining real numbers as equivalence classes of Cauchy sequences of rational numbers. In this light, we have the natural number 1, which is the set {∅}; we have the integer 1, which is the equivalence class {(n+1, n) : n ∈ ℕ}; we have the rational 1, which is the equivalence class {(m, m) : m a nonzero integer}; and we have the real 1, which I won't bother to write down explicitly. There is then the question of what we do with all the sets used in the construction of the reals; a possibility is to discard them, discuss only the set ℝ as defined, and agree that ℤ will be used to denote "the set of real numbers which correspond to integers", and so on.
Of course, this is all irrelevant, since there is no question that both 1 and 0.999..., viewed as real numbers, are both "natural numbers" (in the sense of "a real number which corresponds to an ordinal") as well as "integers" - the same integer, of course. -- Meni Rosenfeld (talk) 12:42, 11 October 2007 (UTC)
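The first stage of the construction Meni describes can be toyed with directly. The sketch below models only the von Neumann naturals (0 = {}, n+1 = n ∪ {n}); the integer and rational stages are omitted, and this is purely an illustration:

```python
# Von Neumann naturals as nested frozensets: 0 = {}, succ(n) = n | {n}.
# The "natural number 1" is then literally the set containing the empty set.
zero = frozenset()

def succ(n: frozenset) -> frozenset:
    return frozenset(n | {n})

one = succ(zero)    # frozenset({frozenset()})
two = succ(one)

print(one == frozenset({zero}))   # True: 1 is {∅}
print(len(two))                   # 2: each natural has itself-many elements
```

This is Endomorphic's later point in miniature: the model is a perfectly serviceable formalisation, even though nobody thinks of a spoon count as a nested set.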
Meni, you're talking about models of 1 rather than the "common" 1, whatever that may be; the Platonic object, if one is so inclined/brash. Consider: the reals are the unique complete ordered field etc, but there are many different models of the reals, some of which contain no non-Lebesgue-measurable subsets, for instance. You ask a guy on the street what 1 is, he's not going to describe an equivalence class of ratios or converging sequences. These are formalisations mathematicians invent to model the world. To say I have the set containing the empty set worth of spoons is just strange. Endomorphic 14:26, 28 October 2007 (UTC)
But we are talking about mathematics here. While it may be true that there is only one "intuitive" 1, which is what people (mathematicians included) think about when they speak about 1, a formal mathematical statement can only be made about some specific construction of it. -- Meni Rosenfeld (talk) 15:05, 28 October 2007 (UTC)

The world would be a better place... if people wouldn't argue over stupid notation. The set of real numbers is uncountable, yet we have no notation to describe more than a finite number of them. Not even all the integers can be described in decimal notation, for example

.

Exponent notation is so much more powerful than decimals... Tlepp 17:33, 6 November 2007 (UTC)

Hm? That number can be expressed in decimal notation easily. It's just tedious. --jpgordon∇∆∇∆ 17:38, 6 November 2007 (UTC)
Number of atoms in the observable universe: ~10^80, see Atom. Of course decimal notation exists in some other larger universes, but it won't help us. Tlepp 18:17, 6 November 2007 (UTC)
So? Who said you have to use anything as grossly large as atoms? --jpgordon∇∆∇∆ 21:20, 6 November 2007 (UTC)

Is it possible to write the exact value of pi in decimal notation (in any universe)? If the answer is positive, then how many different ways are there to write the number 1 in this notation? Is '0.999...' written with '9's? Is the three-dots '...' notation just a shorthand for the real decimal notation? Tlepp 19:23, 6 November 2007 (UTC)

If by "decimal notation" you mean "base-10", no. On the other hand, it's 10 in base pi. --jpgordon∇∆∇∆ 21:20, 6 November 2007 (UTC)
Tlepp, perhaps you can clarify what is your point in this?
Maybe it will help to distinguish a decimal expansion, which is a function from ℕ to {0, 1, ..., 9} which satisfies certain properties, and what you call "decimal notation", which is a string of certain ASCII symbols which is supposed to represent such an expansion. Now, there is no finite string of ASCII characters which represents the decimal expansion of π, but it still has a decimal expansion. Likewise, there is no finite string that represents the expansion of all 9's in the "normal" way, but the string "0.999..." is an accepted notation used to represent this expansion (or, in fact, the number represented by this expansion, 1). -- Meni Rosenfeld (talk) 22:28, 6 November 2007 (UTC)
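The expansion-as-a-function view can be checked numerically. A sketch (names are mine) treating "0.999..." as the constant digit function and evaluating its partial sums exactly:

```python
from fractions import Fraction

# A decimal expansion viewed as a function from positions 1, 2, 3, ...
# to digits 0-9.  For "0.999..." the function is constantly 9.
def digit(k):
    return 9

def partial_sum(n):
    """Exact value of the first n digits: sum of digit(k) * 10**-k."""
    return sum(Fraction(digit(k), 10**k) for k in range(1, n + 1))

# The gap below 1 after n digits is exactly 10**-n, which falls below
# any positive number as n grows -- so the full expansion has value 1.
for n in (1, 5, 20):
    assert 1 - partial_sum(n) == Fraction(1, 10**n)
```

Using `Fraction` avoids floating-point rounding, so the shrinking gap is computed exactly rather than approximately.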

2 + 2 = 3.999...

( Warning: Off beat humor alert ) I'm still trying to grasp the idea of infinitesimal equivalence, but has anyone considered the philosophical repercussions of .999... = 1? When one applies this line of reasoning to the time-old statement 2+2=4, the result could be total anarchy or at the very least multiple singularities. 99.229.239.0 02:03, 2 November 2007 (UTC)

  • Well, yes. There are an infinite number of such singularities; for every number in a given base that has a finite representation with last digit N, there exists a second representation replacing N by (N-1) and following it with an infinite number of nines. --jpgordon∇∆∇∆ 02:41, 2 November 2007 (UTC)
  • 2 + 2 = 3.999... is no more "anarchistic" than 2 + 2 = 1 + 3. Both statements are true. —Caesura(t) 02:54, 2 November 2007 (UTC)
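jpgordon's observation about twin representations can be verified exactly. An illustrative sketch (the example number 0.25 is mine): lowering the last digit of 0.25 and appending infinitely many nines gives "0.24999...", whose tail is a geometric series summing to exactly the amount removed.

```python
from fractions import Fraction

# The tail of 9s starting at the third decimal place:
# 9/10**3 + 9/10**4 + ... = (9/10**3) / (1 - 1/10) = 10**-2.
tail = Fraction(9, 10**3) / (1 - Fraction(1, 10))
assert tail == Fraction(1, 100)

# So 0.24999... = 0.24 + tail = 0.25 exactly.
assert Fraction(24, 100) + tail == Fraction(25, 100)
```

The same computation works for any terminating decimal, which is the general pattern jpgordon describes.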
What "infinitesimal equivalence" mean?
By the way, what Caesura said leads to this:
  • 2 + 2 = 1 + 3
  • 1 = 0.9 + 0.09 + 0.009 + ... = 0.999...
  • So, 2 + 2 = 3 + 0.9 + 0.09 + 0.009 + ...
  • So, 2 + 2 = 3.999...
I see no "philosophical repercussions" in there. Maybe you're thinking of 4 and 3.999... as different numbers, while they are just different representations of the same number. 200.255.9.38 11:23, 5 November 2007 (UTC)
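The four-step derivation above can be checked with exact arithmetic. A sketch (my framing): "0.999..." is the geometric series 9/10 + 9/100 + ..., whose sum has the closed form a/(1 - r).

```python
from fractions import Fraction

# Sum of the geometric series 9/10 + 9/100 + ... with a = 9/10, r = 1/10.
nines = Fraction(9, 10) / (1 - Fraction(1, 10))
assert nines == 1

# Hence the chain 2 + 2 = 1 + 3 = 3 + 0.999... = 3.999... holds exactly:
assert 2 + 2 == 3 + nines
```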

Proofs? Axioms?

I personally think that .9-repeating is equal to one, but I don't like some of the proofs given in the article. Maybe someone could explain to me a little why they're there?

The algebraic proof is my least favorite, especially if its purpose is to try to convince the skeptics. If someone thinks there's an infinitesimal difference between .9999... and 1, then they'll think there's a difference between .333... and 1/3 as well, and that .111... doesn't equal 1/9 exactly. .333... × 3 would equal .999... but wouldn't necessarily equal 1.

For the digit manipulation proof, take it from the side of a non-believer. When you multiply .999... by 10, you get a little less than 9.999... because 9+.999... would give you one more nine than 10*.999... with the same number of terms worked out for each "infinite" series. 9.999... - .999... therefore yields a little less than 9, or 8.999..., which divided by 9 comes out at .9 repeating. Maybe it's not clear, but intuitively it pretty much makes sense.
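The non-believer's "one fewer nine" intuition can be made precise for finite truncations. A sketch (names are mine): for n nines, 9 + x(n) and 10·x(n) really do differ, but by exactly 9·10^-n, which vanishes in the limit.

```python
from fractions import Fraction

def x(n):
    """0.999...9 with n nines, as an exact fraction."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

# The skeptic's observation, exactly: 9 + x(n) has "one more nine" than
# 10 * x(n), and the discrepancy is precisely 9 * 10**-n.
for n in (1, 5, 15):
    assert (9 + x(n)) - 10 * x(n) == Fraction(9, 10**n)
# Since that discrepancy shrinks below any positive amount, for the full
# expansion 10x - x = 9 holds exactly, giving x = 1.
```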

Most of the other proofs involve buried assumptions that infinitesimal differences don't count. That's really the essential question: whether infinitesimals count. Unfortunately for the non-believer, standard mathematical theory includes the "Archimedean property" as a property of the real numbers. If it's more or less an axiom of the accepted mathematical system, doesn't that turn the argument into one of whether or not infinitesimal differences SHOULD count, whether or not the Archimedean property SHOULD describe the set of rational numbers? I personally think it should, because it makes calculus and repeating decimals SOOO much easier to work with. But all the proofs rely on the same principle, and only vary in complication. Doesn't it make more sense to skip past the proofs, which only make the argument more obscure, straight to the fundamental issue of how much mathematics needs to parallel intuition?

I've tried explaining this to people before, and they always get caught up in the proofs. I care more about the "archimedean property" bit. Does it make sense at all? Thanks in advance. Timeeeee (talk) 04:29, 26 November 2007 (UTC)

The rational numbers provably have the Archimedean property (given a nonzero rational p/q, multiplying by q produces a rational whose magnitude is at least |p| >= 1). So we don't have any other choice. The Archimedean property for real numbers follows from the least upper bound property. The only way out would be to deny the least upper bound property. Of course, at some point you have to assume something. Eric119 (talk) 05:32, 26 November 2007 (UTC)
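Eric119's argument is constructive, so it can be run. A sketch (the function name is mine): for any nonzero rational r, its denominator is an explicit witness n with n·|r| ≥ 1, which is exactly the Archimedean property.

```python
from fractions import Fraction

def archimedean_witness(r):
    """Return n with n * |r| >= 1, for a nonzero rational r."""
    n = r.denominator          # Fraction keeps the denominator positive
    assert n * abs(r) >= 1     # n * |p/q| = |p| >= 1 for integer p != 0
    return n

# Even an absurdly small rational is not smaller than every 1/n:
assert archimedean_witness(Fraction(1, 10**100)) == 10**100
assert archimedean_witness(Fraction(-3, 7)) == 7
```

So no positive rational is an "infinitesimal", which is what rules out a nonzero difference between 0.999... and 1 within the rationals or reals.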
Some people are more likely to accept 0.333... = 1/3 than 0.999... = 1, perhaps because they were taught that every number has a decimal expansion, and there is obviously no decimal expansion of 1/3 other than 0.333... .
As for why those semi-rigorous proofs exist - this is because anyone to whom the equality is not clear will probably not really understand the more rigorous proofs, so the equality should be demonstrated to him using tools he might be comfortable with even if he is skeptic about the equality.
The real numbers are well defined, and they definitely have the archimedean property - the only question is whether we should mean a real number when we write 0.999... . The answer is yes, since decimal expansions are only meaningful for real numbers. In other words, 0.999... is equal to 1 in the only algebraic structure in which it makes sense to discuss it. -- Meni Rosenfeld (talk) 09:01, 26 November 2007 (UTC)
I think it may be instructive to point out that there are other ordered fields, such as the hyperreals and the surreals, that do not satisfy the Archimedean property. Of course, these fields are distinct from the real numbers (not to mention really arcane). I guess my point is that if you don't like the Archimedean property, you don't have to assume it; you could work with a different number system. Of course, you lose some nice properties (density of Q, for example), but you do get some interesting results. Even in these non-Archimedean fields, it's true that .999... = 1, however. 128.12.166.12 (talk) 06:36, 6 May 2008 (UTC)
I agree - you don't need infinitesimals and Archimedean nonsense to explain this. When I first encountered this topic (age 10 or so), I used the 0.333... approach to understand the truth. For some reason, it is easy for people to understand this: 0.33 < 1/3, 0.333 < 1/3, but 0.333... = 1/3. The "..." is a special notation; it doesn't just mean "write a lot of 3s and then stop". No matter how many '3's you write, the "..." is still needed, standing for the remainder -- the difference between what you've written so far and the real value of 1/3. There is only one, exact real number that can be written in this way -- no other number, even one arbitrarily close in value to 1/3, will have this notation. Now, multiply through by 3 and you can see why 3/3 can be written as "0.999...".
You don't need proofs or axioms to understand this. It's all about the conventions of the chosen notation. The only confusing thing that you have to deal with is this: while "0.999..." is a valid decimal notation for the real number represented by 3/3, there is a much simpler notation for the same number: "1". The confusion doesn't arise in the case of 1/3, because there is no other choice.
Put another way: if you want 0.999... to be a number "just less than 1", then you've got to accept 0.333... to be just less than 1/3, and now you need to invent a new notation for the actual decimal expansion of 1/3. AndrewBolt (talk) 09:16, 16 June 2008 (UTC)
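AndrewBolt's "remainder" reading of the '...' can be quantified. A sketch (names are mine): every truncation of 0.333... falls short of 1/3 by exactly 1/(3·10^n), and multiplying through by 3 turns that into the familiar 0.999... gap.

```python
from fractions import Fraction

def third(n):
    """0.333...3 with n threes, as an exact fraction."""
    return sum(Fraction(3, 10**k) for k in range(1, n + 1))

for n in (1, 4, 12):
    # The "..." stands for exactly this vanishing remainder:
    assert Fraction(1, 3) - third(n) == Fraction(1, 3 * 10**n)
    # Multiplying by 3 gives the corresponding 0.999... shortfall:
    assert 1 - 3 * third(n) == Fraction(1, 10**n)
```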
(Just to be clear - it is worth pointing out that the above argument does still rely on the Archimedean property; Hackenstrings, and the 0.999... ?= 1 FAQ refutes my argument that 'you need to invent a new notation for 1/3'. It claims that with Surreal numbers, the limit of the expansion of '0.999...' is equal to 1-ε, whereas the limit of the expansion of '0.333..' is exactly equal to 1/3.) AndrewBolt (talk) 09:18, 17 June 2008 (UTC)
The linked page is somewhat misleading. It doesn't deal with decimal expansions like 0.999... at all - it deals with a quite different form of expansion, without an explicit integer\fraction separation point, with a mixed unary\binary base and with weights of ±1 rather than 0 through 1. It says that +-(-+) is exactly 1/3, which is very different from saying that 0.(3) is exactly 1/3.
Another small correction: ω is a specific surreal, and I think ε is usually taken to mean 1/ω. The linked page only says that 1 minus +-(+) is some positive infinitesimal; I'm not sure that it is ε. -- Meni Rosenfeld (talk) 09:49, 17 June 2008 (UTC)
The comment about the Archimedean axiom on the Hackenstrings page is correct. 'Optical' or digit-manipulation proofs make indirect use of the Archimedean property. The Archimedean property implies that the remainder of long division vanishes in the limit, so 1/3 equals 0.333... exactly. The Archimedean property is also required for the infinite digit manipulation 3 * 0.333... = 0.999...
Since this is a general-purpose wikipedia and not a real analysis textbook for undergraduates, consensus seems to prefer lies-to-children. Explicit use of the Archimedean property is probably more confusing than helpful. Tlepp (talk) 12:26, 17 June 2008 (UTC)
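The long-division point can be made concrete. A sketch (names are mine): dividing 1 by 3 step by step, the digit is always 3 and the remainder is always 1, so it never hits zero at any finite stage; what shrinks to zero is the remainder's contribution to the value, and that is where the Archimedean property enters.

```python
def long_division_digits(num, den, n):
    """First n decimal digits of num/den (with 0 <= num < den), plus the
    remainder after each step."""
    digits, remainders = [], []
    r = num
    for _ in range(n):
        r *= 10
        digits.append(r // den)
        r %= den
        remainders.append(r)
    return digits, remainders

digits, rems = long_division_digits(1, 3, 8)
assert digits == [3] * 8   # every digit is 3
assert rems == [1] * 8     # the remainder never reaches zero
```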
Analogy. We are trying to prove that for real numbers a and b, (a + b)² = a² + 2ab + b². Proof: (a + b)² = (a + b)(a + b) = a² + ab + ba + b² = a² + 2ab + b². We have used commutativity of multiplication, although it doesn't hold for general rings. Did we lie to anyone? No. Commutativity of multiplication is a feature of the real numbers, and we are free to use it without proof.
Likewise, we are free to use features of decimal expansions of real numbers (such as digit manipulation) in a proof about the real number 0.999..., without showing where they come from. That kind of information belongs in the article Decimal representation, which is waiting for the bold editor who will heavily expand it (thanks, I'll pass). -- Meni Rosenfeld (talk) 12:53, 17 June 2008 (UTC)

Did you lie when you omitted commutativity from the proof? No, but you left a gap in the proof. The average reader won't notice it, but a formal proof checker (= computer program) will certainly catch it. See http://us.metamath.org/mpegif/mmset.html

In your analogy the gap in the proof is very small. Almost everybody agrees that commutativity of real numbers is intuitive and obviously true. The same can't be said about the Archimedean property. Intuition about anything infinite can't be trusted. There is no 0.000...1, and odd things happen at the Hilbert hotel. Omission of the Archimedean property from the proof is significant. Acceptable at high school level, but a blatant error at graduate level. Tlepp (talk) 14:14, 17 June 2008 (UTC)

Graduate level? Let's give some more credit to the flexibility of mathematical communication. At some point one ceases to feel compelled to constantly hold the reader's hand on basic elementary analysis. Melchoir (talk) 15:02, 17 June 2008 (UTC)

Does .999... hold a distinct location on the number line, separate from 1

Does anyone know if .999... holds a distinct location on the number line, separate from 1, but equal to 1? If not, then I would think that would mean that no repeating decimals of any kind hold a distinct location, or that there would be oddly placed gaps in the continuum. If yes, which I believe it does, then I would think that .999... is equal to one only within mathematical calculations, but has an inherently different value than 1. —Preceding unsigned comment added by Southcrossland (talkcontribs) 14:47, 3 December 2007 (UTC)

No, it does not hold a separate location. If it did have a separate point on the number line, that would imply that there was a third point halfway between the points 0.999... and 1. However, since that third point does not exist (at least, I've never seen anybody tell me what it would be), there is no second point, and the two numbers both appear on the same point on the number line, and thus have the same value. --Maelwys 15:02, 3 December 2007 (UTC)
And no, it doesn't mean that "no repeating decimals of any kind hold a distinct location" - just the ones where the repeated digit is 9. Confusing Manifestation(Say hi!) 22:19, 3 December 2007 (UTC)
Just to clarify -- Jpgordon means when the repeated digit is 0, since it's a bit confusing what he's referring to. Gscshoyru 00:52, 4 December 2007 (UTC)
Not to be disruptive, but whats a "third point"?--Sunny910910 (talk|Contributions) 02:08, 4 December 2007 (UTC)
One of the properties of the real numbers (and hence, the number line we're talking about here) is density -- between any two distinct points on the number line, there is a third point. An infinite number of third points, actually. --jpgordon∇∆∇∆ 04:09, 4 December 2007 (UTC)
Just to dispel any possible confusion - "third point" doesn't mean anything. It's just a point that is neither the first point discussed nor the second point discussed (since we start the statement by considering two points). -- Meni Rosenfeld (talk) 09:04, 4 December 2007 (UTC)
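The density argument in this thread can be demonstrated with exact arithmetic. A sketch (names are mine): between any two distinct points there is always another point, e.g. their average, whereas no such point can be exhibited between 0.999... and 1.

```python
from fractions import Fraction

def between(a, b):
    """Return a point strictly between two distinct numbers a and b."""
    m = (a + b) / 2
    assert min(a, b) < m < max(a, b)
    return m

# Between any two distinct rationals there really is a third point:
assert between(Fraction(9, 10), 1) == Fraction(19, 20)
assert between(Fraction(99, 100), 1) == Fraction(199, 200)
# For 0.999... and 1 no such point exists, because the gap 1 - 0.999...9
# (n nines) is 10**-n, smaller than any fixed positive amount -- so the
# two notations name the same point.
```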
  • First, each real number has a distinct position on the number line.
  • Second, each real number has an infinite set of possible representations.

The whole point here is that 0.999... and 1 are two representations of the same real number. So, there is a distinct location on the number line for the real number most commonly represented by the symbol "1". And you can represent the same number as "0.999...", "1.000...", "1/1", "9/9", "I", "1²", and so on. This does not change the position of this same number on the line. 200.255.9.38 (talk) 12:31, 7 December 2007 (UTC)

A Math Teacher's View on it

I asked an 8th grade math teacher about the whole 1=.999... thing, and when I showed it to her, she simply erased the equal sign and replaced it with '≈', or 1≈.999...

What about this theory? —Preceding unsigned comment added by 24.243.19.189 (talk) 23:15, 4 December 2007 (UTC)

She's wrong. It's not approximately equal. It is equal. They are exactly the same, no approximation needed or involved. Gscshoyru (talk) 23:18, 4 December 2007 (UTC)
Of course, one would hope that any sensible definition of an "approximate equality" relation is reflexive, that is, that anything is approximately equal to itself. So she's not actually wrong. 0.999... is approximately equal to 1; in fact, it is actually equal to it, not just approximately. But that is a less informative statement, in the same way that x > 0 gives less information than x = 3. If she meant 0.999... ≈ 1 AND 0.999... ≠ 1, then she would be genuinely wrong. Maelin (Talk | Contribs) 11:23, 5 December 2007 (UTC)
Writing 1≈.999... in itself might not be wrong, but erasing the equals sign to do so is. That person clearly has no business teaching mathematics, so is there anyone you can contact to get her fired? -- Meni Rosenfeld (talk) 11:28, 5 December 2007 (UTC)
To be fair, I have a lecturer that calls infinity a number, has problems calculating fractions, and frequently abuses notation for no good reason. LegitimateSock (talk) 13:52, 5 December 2007 (UTC)
There is nothing wrong with calling infinity a number - whoever said "number" must mean a natural \ whole \ rational \ real \ complex number? Being skilled in technical calculations is not a prerequisite to being a mathematician, and everyone abuses notation one way or another. -- Meni Rosenfeld (talk) 14:04, 5 December 2007 (UTC)
What you call it isn't important, but treating it as a number is a problem. Consider the cardinality of the intersection of two parallel lines - it should be 0, but if you treat infinity as a number, you get 1; that's clearly a contradiction. Of course, if you're studying projective geometry, infinity is treated as a number; it's just a different definition of number. As for calculations - I've yet to meet a mathematician that can do basic arithmetic ;). Abuses of notation are very common; as long as the meaning is clear from context it doesn't matter. --Tango (talk) 14:46, 8 December 2007 (UTC)
Incompetent teachers really annoy me. She should be fired, but I doubt it will ever happen. At least in the UK, being incompetent doesn't seem to be valid grounds for dismissal. --Tango (talk) 14:46, 8 December 2007 (UTC)
Let's dial that talk back a notch. Misunderstanding a technical point regarding the definition of the real numbers doesn't necessarily make you "incompetent" to teach 8th-grade math. She's obviously not a mathematician, but it's entirely possible that she's a very fine middle-school math teacher. Very few mathematicians would want to teach middle-school math, and it's by no means obvious that they'd be any good at it if they did want to. --Trovatore (talk) 21:40, 8 December 2007 (UTC)
If she doesn't know it, she shouldn't talk about it. There's no problem with a teacher saying "I don't know", there is a problem with a teacher stating as fact something which is complete nonsense. --Tango (talk) 00:20, 9 December 2007 (UTC)
She made a mistake. Your posturing about it is extremely unattractive. You could stand to admit when you're wrong,too -- not about the mathematics in this case, but about something arguably more serious. --Trovatore (talk) 01:09, 9 December 2007 (UTC)
Let's not forget that even professional mathematicians make simple and embarrassing mistakes in mathematics. I recall reading that one (maybe more?) mathematician wrote an angry letter to Marilyn vos Savant after the original publication of the Monty Hall Problem. We can all misunderstand things and make mistakes, and this is hardly a critical point for eighth grade mathematics. Let he who is without sin cast the first stone, etc. Maelin (Talk | Contribs) 09:50, 9 December 2007 (UTC)
Were it a one-off, you would have a point, but in my experience there are far too many teachers that teach things they don't understand well enough and thus make far too many mistakes. Making mistakes teaching young children is far worse than making mistakes in a professional paper. The intended audience for professional papers is experts who can spot mistakes for themselves; children just believe what their teachers tell them. --Tango (talk) 15:57, 9 December 2007 (UTC)
Yes, and clearly since this one particular teacher made one mistake, and some teachers makes many mistakes, we should call for this one particular teacher to be fired. —Caesura(t) 19:23, 9 December 2007 (UTC)
Yes. It's not the teacher that matters, it's the pupils. If this teacher has made this mistake, there is a very good chance they will make others and potentially hundreds of children will leave school with serious misunderstandings. The harm done by firing them is one person losing their job, the harm done by letting them continue to spout rubbish is much greater. The only good argument against firing incompetent teachers is the difficulty in replacing them - I know in the UK maths teachers are in short supply. If it is possible to replace this teacher, they should be replaced. If it's not possible to replace them, something needs to be done about finding new teachers, or better educating the ones we have. --Tango (talk) 21:29, 9 December 2007 (UTC)
(outdent) Tango, this is nonsense, and all you're doing is digging yourself a deeper hole. In most cases it will never matter whether a student thinks that zero point nine repeating is exactly one, or just almost exactly one -- in any application, he'll round off anyway. The students do need to get an intuitive understanding of the concepts, but it's really more a quantitative intuition that they need than a topological one. For quantitative applications of the reals to the physical world, the intuition that the reals are "inherently approximate", and that in any given context there's a maximum precision you can usefully extract from them, will do fine.
For the students for whom the misunderstanding will matter -- mostly, the ones who go into mathematics -- they'll have plenty of time to find out the teacher was wrong. One of the lessons everyone has to learn (and for most people it's a far more important notion than the Archimedean property of the reals) is that not everything they taught you in eighth grade was God's own truth.
By the way, your remarks about how if infinity is a number then parallel lines have a nonempty intersection, are frankly just wrong; there is no necessary connection between geometry and the question of what constitutes a number. --Trovatore (talk) 23:02, 9 December 2007 (UTC)
On your first point, it's not important what the mistake is; it's the whole idea that teachers talking complete nonsense is acceptable that annoys me. When I was at school I frequently knew more than my teachers about a given topic and it is extremely frustrating (when they refuse to listen to corrections, at least, which is quite common in my experience). Perhaps I take matters of principle too far, and either way, this isn't the correct forum for such a debate. As for your second point, I disagree. Geometry (in particular, Euclidean coordinate geometry) is all to do with n-tuples of numbers; what is and is not a number is, therefore, of utmost importance. The difference between affine and projective geometry is precisely that in one infinity is not considered a number (and, therefore, point) and in the other it is, and the two types of geometry have some very different properties. --Tango (talk) 00:04, 10 December 2007 (UTC)
It's not acceptable for teachers to present errors as fact, but it is acceptable for them to make occasional errors -- hard to avoid that -- and it isn't necessary for eighth-grade teachers to have a perfect understanding of foundations. On the geometry thing, you're flat wrong. Euclidean geometry treated from a coordinate-specific viewpoint (not necessarily the best viewpoint, by the way) is not about "tuples of numbers"; it's about tuples of real numbers. No one is suggesting that infinity should ever be considered a real number. Projective geometry adds a point at infinity, not necessarily a number; the two things are orthogonal. Hmm, I should think the last sentence through more carefully -- I haven't done much projective geometry, really. But the comments before it stand. --Trovatore (talk) 01:54, 10 December 2007 (UTC)
"Number" generally means an element of some subset of the complex numbers (which subset depending on context). If you're including infinity as a number then you are using a non-standard definition of number. My comment about projective geometry works best if you just think in 1 dimension - the projective real line is precisely the regular real line union a point at infinity (that's not how it's usually defined, but it is equivalent upto isomorphism). (In higher dimensions, you end up with lines, etc. at infinity, which is a little more difficult to translate to numbers, the principle is the same, though.) --Tango (talk) 14:00, 10 December 2007 (UTC)
Hence my comment above, that while this is what is usually understood by "number", there is no reason why that must be so. Examples which are in common usage are ordinal numbers and cardinal numbers, and slightly less so hyperreal numbers and surreal numbers. We may also abuse terminology a bit and call ∞ a projectively extended real number. Calling ∞ a number only spells the difference between Euclidean and projective geometry if you assume that this title comes with some strings attached. -- Meni Rosenfeld (talk) 14:25, 10 December 2007 (UTC)
It's not even true in the first place that "this is what is generally understood by 'number'". It's true that most people have experience only with complex numbers at the most, but that does not in any way make it "nonstandard" to call other things numbers. Tango's assertion here is just false. --Trovatore (talk) 16:15, 10 December 2007 (UTC)
To be more explicit (because this is something that comes up from time to time), there is nothing "standardly meant by 'number'". The term "number" is a grab-bag for all sorts of disparate concepts. The question "what is a number?" has no content; it's a pseudoquestion. And the assertion that "infinity is [or is not] a number" likewise has no content. --Trovatore (talk) 16:22, 10 December 2007 (UTC)
I agree with the part where the assertion "x is a number", as well as the question "is x a number?", have no content. But is this really true for the question "what is a number?"? I would say this is a legitimate question, with an equally legitimate, meaningful and contentful answer "There is nothing standardly meant by 'number'". -- Meni Rosenfeld (talk) 16:39, 10 December 2007 (UTC)
OK, I'll give you that one. --Trovatore (talk) 17:50, 10 December 2007 (UTC)
It's not about whether the isolated problem of 0.999... vs. 1 has any importance for a typical pupil (for which the answer is clearly no). It's about the kind of reasoning that could lead to conclusions such as 0.999... ≠ 1 or 0.999... = 1. In my experience responding to opponents of this article, I have noticed that the kind of people who say that 0.999...≠1 tend to have no idea whatsoever what mathematics is, what are the driving forces behind mathematics, how mathematical results are established, and so on (not that I do... I would, however, like to think I understand a little bit better than those). Their methods for arriving at the conclusion 0.999...≠1 are usually flawed from the ground up. Applying such reasoning to their everyday life could have unfavorable results.
Of course, most people don't apply mathematical reasoning to their everyday life, which is a shame, I think. I believe the reason for this is that they don't know what mathematics is or how it can be used, due in no small part to teachers who instill false notions of it into them. The teacher in question might be an example - her belief in 0.999...≠1 suggests flawed understanding of mathematics, which she might pass on to the younger generation, as well as any other mistakes this might cause her to make.
I agree that these are somewhat far-reaching conclusions from a single mistake she has made. Taking into account some of the points raised here, I withdraw my earlier comment that she should be fired based solely on that possibly isolated error. I do maintain my position, though, that this should be looked at. If it is found that this is indicative of a deeper misunderstanding which causes other misleading claims, I still believe she should be looking for another job.
If the situation is so serious that this approach will lead to an extreme shortage of teachers, the only solution is revising the teaching model or the training received by teachers. -- Meni Rosenfeld (talk) 11:38, 10 December 2007 (UTC)
The problem is you're proposing a solution which 1) probably won't work 2) doesn't address the fact that you still have the far bigger problem of actually getting teachers in the first place. Sure, you could increase the salary of a teacher to that of a CEO, but unfortunately most governments are unwilling to. You seem to think there is some magic solution to the very real problems of the world. There isn't. There's also the fact that being a brilliant mathematician doesn't make you an excellent 8th grade maths teacher; indeed, I would go so far as to say that many would be crap teachers. Teaching a subject effectively requires much more than a good understanding of the subject. Indeed, it would generally be far better to have an excellent teacher with an average understanding of maths than an average teacher with an excellent understanding of maths. A far better test of this teacher would be how he or she responds if a student takes to her the material to demonstrate that 0.999... = 1. Don't get me wrong, there are a lot of crap teachers out there, some with such a poor understanding of a topic that they definitely should not be teaching it. However, I'm far from convinced that this single error is any cause for concern for an 8th grade maths teacher. Nil Einne (talk) 15:37, 13 December 2007 (UTC)
Funny you should say that. I do think there are magic solutions - not to the real problems, of course, but to the false ones. Most problems only exist because everyone is so accustomed to them that they wouldn't even try to think about a solution.
You are exactly correct - teaching a subject effectively requires much more than a good understanding of the subject, which means that a good understanding is the very minimum. What on earth is a teacher worth if he doesn't even understand the subject? 8th grade kids are mature enough to learn things more or less on their own if no competent teacher is available.
The OP did mention something about having "showed it to her", and while it isn't clear what it was exactly, it is plausible that he has mentioned ideas from our article. -- Meni Rosenfeld (talk) 21:15, 13 December 2007 (UTC)
A good understanding of what they teach, sure. But the teacher did not have to teach that 0.999... = 1. The teacher's understanding is probably sufficient for what he or she is teaching. I don't see any reason why the fact that the teacher doesn't have an exact understanding of what they are teaching matters if they are able to teach what they are supposed to teach properly. Also, I disagree that most 8th grade kids are mature enough to learn things on their own without a teacher. The reality is, having this teacher with a slightly flawed understanding of the topic he or she is teaching is far, far better than having no teacher, or having a professor of maths who has no idea how to teach 8th grade students, for the vast majority of 8th grade students. Nil Einne (talk) 03:43, 26 February 2008 (UTC)

This is such a popular topic, I think I'll add a bit of fuel. Did you know... that many teachers think that 0 is neither even nor odd? See paragraphs 2 and 3 of Evenness of zero#Teachers' knowledge. There are also plenty of anecdotes on the Internet of teachers claiming that 0 isn't even, or disagreeing amongst themselves. Is this worse than confusion over 0.999...? Melchoir (talk) 22:44, 13 December 2007 (UTC)

The only way 0 could not be even is if it was not an integer (since 0/2 = 0). I would say that a fundamental misunderstanding of integers is worse than a fundamental misunderstanding of real numbers, so yes, the evenness of 0 is more important than 0.999... = 1. --Tango (talk) 23:47, 13 December 2007 (UTC)

I agree with Tango. I've had teachers teach false information before, or make careless mistakes. Even blatant contradictions in the same sentence, which make you scratch your head and say WTF? A teacher really should be evaluated before being allowed to teach children. 64.236.121.129 (talk) 21:27, 19 December 2007 (UTC)

They are. Melchoir (talk) 23:29, 19 December 2007 (UTC)
You guys talk as if it's a teacher's job to instill truth within the brains of their pupils. It's not. Their job is to prepare the brains to later accept truths, or other temporary quasi-truths. Ask a physics teacher whether the sun goes around the earth, or the earth goes in circles around the sun, and *maybe some* will point out that the earth *actually* goes in ellipses about the sun; but none will say that the earth and sun both travel in straight lines through a space-time distorted by mass, even though it's the true answer. Teaching isn't fundamentally about truth, it's about the preparation of minds. Endomorphic (talk) 12:12, 29 December 2007 (UTC)
There is a big difference between maths and science. Science is about theories; they are essentially best guesses based on the evidence we have. Maths is about theorems that have been proven beyond all doubt (barring human error, which is very unlikely given the number of experts that check these things). You can approximate science and it still serves a useful purpose; approximate maths and it's just plain wrong. (I should also point out that General Relativity says the Earth travels along a geodesic, not a straight line. A geodesic is the closest you can get to a straight line, but it isn't straight - in the case of the gravity well of a point mass [which the sun's gravitational field is pretty close to], geodesics take the form of conics, and in the case of the Earth that conic *is* an ellipse. The teacher in your example is correct, they just haven't gone into full detail.) --Tango (talk) 12:42, 29 December 2007 (UTC)
General relativity says that the geometry of space is not Euclidean. "Straight line" is a term which originates from Euclidean geometry, but if I am not mistaken, is used as a synonym for geodesic in other geometries.
Now, where did you (Endomorphic) get the idea that it is not the job of teachers to instill truth within pupils? Why do pupils spend a dozen or so years in school? To learn lies? To learn nothing at all? No, teachers should give their pupils knowledge, and this knowledge had better be truthful, even if only approximately. "Preparation to later accept truth", or the ability to process information and apply critical thinking to it, is just one of the many important things the young minds must learn. -- Meni Rosenfeld (talk) 00:22, 30 December 2007 (UTC)
@Tango: Teaching math and science both require a process of mental refinement. In the case of science, it's natural to progress through the various models proposed, they typically increase in difficulty and complexity. In mathematics, one must foremost be taught to abstract. Teaching numbers to kids, you might start with "here are 3 oranges, here are 3 apples, see how there are 3 of each?" to abstract the number from the objects, and label this new number "3". Later, faced with 2.999... and 3, one must again abstract away from these labels. You don't start with the most complete truthful version first because the brain isn't ready. "You have {{}, {{}}, {{},{{}}}} apples" ? I don't think that'd work. Even teaching differential geometry (the last example minus the planets), you're not going to explain a curvature tensor to someone who's not heard of conic sections.
@Meni: A sports team might happen to score points or not concede penalties; these all help, but they aren't fundamentally what the team is trying to do. It just wants to win. Similarly, pupils might gain some knowledge along the way, but the main goal at school is to learn to think.
I'm saying that sometimes teachers lie by omission or by generalisation, to which you've both agreed. It's not a bad thing. This doesn't cover or excuse teacher error; just that teachers shouldn't be expected to be university level experts on every known topic. That's why learning to think is more important than remembering the facts your teacher offers. Endomorphic (talk) 12:33, 31 December 2007 (UTC)
Do you really think that "You have {{}, {{}}, {{},{{}}}} apples" is "the most complete truthful version" while "You have 3 apples" somehow isn't? Using standard set theory definitions they say exactly the same thing. There is no need to be extremely technical to be truthful. For instance, "The Earth goes around the Sun." is a true statement. There is no need to mention the shape of the path or relativity to be truthful. There is also no need to be extremely accurate to be truthful. You could say there are a million people living in a city and not be lying even if there aren't exactly that many people living in the city. Eric119 (talk) 18:32, 31 December 2007 (UTC)
Exactly. You can teach maths correctly without going into full rigorous detail. Saying something which is provably false is very different from saying something that is true but not giving a rigorous proof of it. Also, the concept of natural numbers as sets of all smaller natural numbers isn't really a definition, it's a construction. A natural number is anything satisfying the axioms of a natural number (for example, Peano's); the whole business with the sets is just to prove that there is something satisfying those axioms. "0, 1, 2..." satisfy the axioms just as well as "{}, {{}}, {{},{{}}}...", they just do so by definition rather than by construction, so aren't a proof. --Tango (talk) 18:51, 31 December 2007 (UTC)
Yes, and the point here is that this teacher was drawn into teaching on something he or she doesn't normally teach. Yes, perhaps it was a mistake for the teacher to answer the question when he or she didn't know, but it's obvious that the teacher thought his or her answer was correct. And there is no clear evidence the teacher was pigheaded about it or refused to accept he or she was wrong when presented with the arguments. The reality is, this teacher is probably a fine teacher when it comes to teaching what he or she is supposed to teach, which is maths, but not to the level where 0.999... = 1 matters. I agree with Endomorphic here: ultimately what school is about is learning how to think, how to learn, how to function. How many of us really remember half of what we learnt in school? I probably don't, and I'm still in my 20s. Does this mean my time at school was a complete waste of time? No, because the time was just as much about what I've already mentioned. The fact that many of the details may be wrong or inaccurate therefore is not the most important thing in the world. And if we want to learn what we've forgotten, we should find it a lot easier because of what we've already learnt, even if some of it wasn't completely accurate. This is not to suggest that schools should teach inaccuracies, but that given the nature of schools it is inevitable, and doesn't cause great harm provided school properly prepares the student for life. Nil Einne (talk) 03:56, 26 February 2008 (UTC)

Flawed arguments

Lol if anyone is interested in showing the holes in more flawed arguments, check out User:ConMan/Proof that 0.999... does not equal 1 Nil Einne (talk) 16:19, 13 December 2007 (UTC)

They call the page "Proof" but I don't see any proofs! What a letdown. Gustave the Steel (talk) 15:56, 17 December 2007 (UTC)
On the contrary, it would be very worrisome if there were indeed any proofs, since that would mean that ZFC is inconsistent and every mathematical result ever discovered is worthless. But this page was only created to let opponents release some steam, it's their fault for not coming up with anything clever. -- Meni Rosenfeld (talk) 15:59, 17 December 2007 (UTC)
Yeah, I created that just before this subpage was made, so I have to assume that the people who found it did so by reading through the archives of Talk:0.999... (and given how incoherent some of those posts are, that's quite a task). If you look at the talk page, you can see some people having it out with one of our old friends who dropped by for a while (but not for long, if you check their own talk page). Confusing Manifestation(Say hi!) 02:41, 3 January 2008 (UTC)

Paradox?

I am currently studying Calculus 2 and Physics 3 (both Advanced Placement) at my high school. While I have no experience with real analysis, I like to think that I understand logic fairly well. My experience with "simple" logic is that it does not always apply, and can often create apparent paradoxes, e.g., in relativity. Thus, I don't want to be sucked into the automatic assumption that this is not true, simply by the reasoning that thousands of people smarter than me declared it to be true, and therefore it is unlikely that I would prove them wrong. However, I would like to make a point. Consider an infinitely small particle. As far as anyone can observe, it does not exist. It cannot be measured by modern (human) means; however, we "know" it exists. As zero represents non-existence, there is a difference between the two. While numerically this particle may equal zero, to say that it equaled zero would be to deny its existence, which would contradict logic (since we "know" it exists). Therefore, since the particle exists, a single arbitrary unit, "1", minus this particle is different from the original unit. However, if we assume that "1" is made up of an infinite number of particles, then in "particle units" this is equivalent to [infinity] - 1, which by the conventional standards of math is equal to infinity, which would prove that this particle is zero, thus proving that it does not exist. Paradox?--Vox Rationis (Talk | contribs) 04:40, 16 January 2008 (UTC)

  • No. Numbers aren't things. Things might indeed have a smallest possible size. The real numbers are theoretical constructs, and have no smallest (or largest) possible size. Oh, and, what would an "infinitely small particle" be? How can we "know" it exists? --jpgordon∇∆∇∆ 06:59, 16 January 2008 (UTC)
  • Try not to think of a point-particle as being infinitesimal, rather think of it as not having a size at all. It has a position, but no size. Therefore, "1 metre minus the size of a point-particle" isn't a meaningful statement and the problem you describe disappears. That said, you shouldn't use physics to try and understand maths, maths is abstract and is built around pure logic, it doesn't need to fit the real world. As long as maths is self-consistent, it is correct, regardless of whether it fits what we see around us. --Tango (talk) 14:55, 16 January 2008 (UTC)
I'm not aware that (Einstein's) relativity actually causes any paradoxes. Something that seems strange and counterintuitive is not necessarily a paradox. Nor do I believe that "infinitely small particles" exist - all particles have some size, although defining what 'size' means at this scale can become strange. "maths is abstract and is built around pure logic, it doesn't need to fit the real world." I disagree to some extent. While math can extend beyond what is possible in the real world (e.g. infinity), it becomes rather useless if you develop a math that directly contradicts the real world. Algr (talk) 18:55, 21 February 2008 (UTC)
Modular arithmetic directly contradicts the real world - I put two beans in a pot, then put another two in, I don't end up with no beans in my pot, however that's exactly what arithmetic modulo 4 tells me. Modular arithmetic is extremely useful, however. No maths directly relates to the real world. You have to model the real world somehow, and then you can use maths to describe that model. We model beans using regular integers, we don't model them using integers modulo 4, since that doesn't work. If your maths ends up contradicting the real world, it's not a problem with the maths, it's a problem with your model. --Tango (talk) 19:06, 21 February 2008 (UTC)
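Tango's bean-counting example can be stated in a couple of lines of code; a trivial sketch (Python's `%` operator gives arithmetic modulo 4 here):

```python
# In arithmetic modulo 4, two beans plus two beans is zero beans.
print((2 + 2) % 4)  # → 0

# Ordinary integer arithmetic models the pot correctly.
print(2 + 2)        # → 4
```

Both computations are internally consistent; only the second is a sensible model of beans in a pot, which is exactly the modelling-versus-maths distinction being drawn above.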
Modular arithmetic does not contradict the real world. When you use it, you are describing a certain real world situation. In your Mod4 example, perhaps the pot is on a spring, and the weight of four beans is just enough to make the pot flip over and dump all the beans out. Or perhaps there is a dial where four turns brings you back to your starting point. If your real world situation isn't like this, then you just made a math error by throwing in Mod4 for no reason. Algr (talk) 19:30, 21 February 2008 (UTC)
No, not a maths error, a modelling error. Modelling is part of science, not maths. --Tango (talk) 21:32, 21 February 2008 (UTC)
So the process of describing a real world situation mathematically isn't considered "math", but "science"? That's curious. Well, I suppose you have to draw the line somewhere. Algr (talk) 07:01, 23 February 2008 (UTC)
Well, it's the bridge between science and maths, exactly where you draw the border is pretty arbitrary, but I would call it science, since it's based on observation, not logic. --Tango (talk) 13:47, 23 February 2008 (UTC)

Proof that 0.999... equals one

Eq. 1: Assume 0.9...≠1.

take square root of both sides.

Eq. 2: 1-0.9...=some non-zero X because of eq.1.

Multiply Eq.1 by 0.9...

Eq.3: (0.9...)^2≠0.9...

Here comes the dirty trick: 0.9... is infinitely long, and infinitely close to one (though not one by our assumption), so 0.9...=(0.9...)^2. So we have an inconsistent triad. 2 and 3 depend on 1, so saying either of them is false leads to saying 1 is false. One of them must be false by definition of an inconsistent triad, so 1 must be false.

So 0.9...=1.

68.122.147.216 (talk) 20:52, 9 March 2008 (UTC)


I'm guessing that you are assuming that 0.9...=(0.9...)^2 proves that .9...=1? But infinity = infinity^2. This doesn't prove that infinity = 1. .999... is a form of infinity, and so shares some of its properties. Algr (talk) 16:50, 10 March 2008 (UTC)
Huh? There are exactly two real numbers satisfying x = x^2, namely 0 and 1. The proof is trivial. The "0.999... shares some of infinity's properties" bit is complete nonsense. -- Meni Rosenfeld (talk) 19:29, 10 March 2008 (UTC)
I don't see how "0.9... is infinitely long, and infinitely close to one (though not one by our assumption), so 0.9...=(0.9...)^2" works. You've assumed there's one non-zero infinitesimal in the reals, so why can't there be two? If 0.9... = 1 - x for some non-zero x, then by the binomial theorem (0.9...)^2 = (1 - x)^2 = 1 - 2x + x^2, so (0.9...)^2 ≠ 0.9... --Tango (talk) 20:13, 10 March 2008 (UTC)
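Meni's claim above, that 0 and 1 are the only real numbers equal to their own square, can be checked directly; a standard one-line factorisation (not part of the original thread):

```latex
x^2 = x
\;\Longleftrightarrow\; x^2 - x = 0
\;\Longleftrightarrow\; x(x - 1) = 0
\;\Longleftrightarrow\; x = 0 \ \text{or}\ x = 1
```

So if one accepts (0.9...)^2 = 0.9..., and 0.9... is clearly not 0, the only remaining possibility is 0.9... = 1.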

Could .999... get plutoed one day? (from Talk)

There are two kinds of truth: Declaratory, and Observational.

Some Declaratory Truths:

The people shall pay taxes to the king
Pot is illegal
Pluto is not a planet
Sherlock Holmes lives in London.

These are true because someone said so. If I become king of the world, I can change any of these.

Some Observational truths:

The sun rises in that direction. (which I shall call East)
Unsupported objects fall to the ground
Living things need water.

Observational truths have nothing to do with humans. Humans can discover them, but they would remain true even if humans never existed. What kind of truth is .999...=1? I have always thought that all math was observational. No one invented Pi, or passed a law establishing the distributive property. But you are saying that "if mathematicians say 0.999...=1, then that's the way it is"! That is clearly declaratory - which means that any mathematical "proof" you may come up with is simply an excuse to justify what mathematicians have already decided. It would seem that all I need to do is build up enough political power and I could alter mathematical reality! Given the number of people that you admit reject .999...=1, the equation might actually get "plutoed" one day. I find this very disturbing - it flatly contradicts what I've always understood math to be. Algr (talk) 03:32, 21 March 2008 (UTC)

It's an observational truth, in that it follows logically from the definitions and axioms. I don't think anyone is saying "if mathematicians say 0.999...=1, then that's the way it is". I am not aware of any mathematicians who reject this. Just because you can conceivably get political power to declare that 1+1=4 or Pi equals 3 doesn't change what is true. Also see Talk:0.999.../Arguments (where this discussion might best be held?) Mdwh (talk) 03:45, 21 March 2008 (UTC)
That is Tango's exact quote about eight paragraphs above here. It seems that you, Mdwh, and Jpgordon do not agree either. I hope you can understand why I am as frustrated with this discussion as others are. Algr (talk) 05:01, 21 March 2008 (UTC)
I think he is talking about the definitions and axioms, which I mention. For example, one could define "+" to mean "-", so 1 + 1 = 0. But that doesn't mean that you've changed the "observational truth" that 1 + 1 = 2, you've just redefined the symbols to mean something different. To take one of your examples, it would be as if someone redefined East to mean West - does this mean "The sun rises in the East" is no longer an observational truth? You can choose a different set of axioms or definitions just like you can call the "direction" something other than East, but this doesn't mean you can arbitrarily decide mathematical truths to be something else, just like you can't declare the sun to be rising in a different direction. Does that make sense? I guess we give different answers because it's a bit of both - the definitions/axioms are declaratory, but what follows is observational.
Could the definitions be "Plutoed" then? Well, the definition of a planet was always a bit arbitrary, whilst the real number system is consistent and useful. Plus any new number systems can just be called something else, so there's no reason to redefine the reals. There are some definitions in mathematics which aren't agreed upon (e.g., should the natural numbers include 0), but it seems that mathematicians there are happy to live with both definitions, rather than needing one to be declared the "right" definition. Mdwh (talk) 05:12, 21 March 2008 (UTC)
I, too, think you're missing Tango's meaning. Perhaps if I fill in the gaps. If mathematicians say 0.999...=1, then they are telling you that they have a proof of this statement, so it is at least as true as any logical statement can possibly be true. You can tell right away when a mathematician is telling you something that is not certain, because it has a big bold Conjecture 3.1 next to it. Absent such a marking, that's the way it is.
For this Wikipedia article, the issue is still more clear, because we have several proofs right there in the article. The reader doesn't even have to take it on faith that proofs exist, because behold, there they are. Consider an analogy. You're a kid, and you're awoken by your mother, who tells you that it's snowing and you don't have to go to school. Here is the correct reaction:
  • Hooray! It's snowing! I don't have to go to school. Clearly my mom has looked outside and seen snow, and she is passing the news to me. In fact, I'm so thrilled by the news that I want to see it myself. Oh look! It really is snowing! No school!
Now here are several incorrect reactions:
  • Who does that lady think she is? Does she think that simply by declaring "it's snowing", those words magically cause snow to appear? Does she pretend to hold absolute authority over the weather? Well I for one am too sophisticated to believe such nonsense.
  • This statement is absurdly vague! Is it snowing in the Sahara? In deep space? She didn't say what she meant. Perhaps I should spend the next hour bugging her with questions about exactly what she meant. It's her fault for being imprecise in the first place.
  • This news paralyzes me with fear and doubt! Perhaps it is snowing, but what if someday "snow" means rain and "rain" means snow? Then clearly it will have been raining instead of snowing, and rain doesn't get you out of school. I'd better pack my bag.
And here, beyond merely incorrect, is your reaction:
  • Hello, Channel 4 news? ... Yes, I have lost all respect for your reporting and for meteorology in general. You reported that it's snowing, but you failed to take into account the possibility that it's not snowing. ... mmmhmm ... yes ... Yes, of course I've watched your live coverage, that's what I'm complaining about. ... Yes, I see the snow on the TV. ... Yes, I'm at the window, and I see the snow outside ... right. ... Yes. ... Well I concede there's snow, but that hardly proves it's snowing per se, does it? By willfully interpreting the snow to mean that it's snowing, you are relying upon circular logic, and hence your coverage is mistakenly biased against alternate viewpoints. For example, my pet snake didn't believe a word of your show, and do you have any idea how many snakes there are in the greater metropolitan area? .... Hello? Hello?
Melchoir (talk) 07:59, 21 March 2008 (UTC)
What you say is true, but it's not quite what I was saying. Mdwh's interpretation was closer. The declaratory truth is that "0.999..." means what it does, that "1" means what it does and that "=" means what it does. Those truths are declared by mathematicians and that's the way it is. There is then an observational truth that those definitions imply that 0.999...=1. That observational truth is indisputable since we have various proofs of it, the only part of the statement you can disagree with without just being plain wrong is the definitions, so that's the part I was assuming was being disputed when I said that "mathematicians say 0.999...=1, so that's that". --Tango (talk) 15:41, 21 March 2008 (UTC)
Ah, pardon me! I agree with what you're saying, but I think for rhetorical purposes we hold up the possibility of disputing the definitions too much on this talk page. As far as I can tell, the most common misconceptions around 0.999... are substantive, and indeed just plain wrong. So we have to be careful not to assume that the ideas being disputed are actually the ideas for which dispute makes sense. (This is just a general principle, of course, which may not apply to your exchange with Algr.) Melchoir (talk) 17:47, 21 March 2008 (UTC)
You're absolutely right. I was overly simplifying my point when I originally made it, which I think has caused some confusion. I think we do have to accept that most of the people disputing it are just failing to understand the concepts rather than actually disputing the definitions. --Tango (talk) 18:17, 21 March 2008 (UTC)

Mdwh, the reason I wrote "The sun rises in that direction. " is because I anticipated your tactic about redefining the word "east". But you went through the whole thing anyway. This suggests that you don't understand the question, and are simply substituting a different question that you DO know the answer to. The actual article is guilty of this too. It brings out "unique decimal expansion" 4 times before it even gets to the Proofs section, yet ignores serious practical problems like how to define exclusive ranges. Algr (talk) 20:55, 21 March 2008 (UTC)

I don't understand. What are you saying about unique decimal expansions? And what is an exclusive range? --Tango (talk) 21:17, 21 March 2008 (UTC)


Just not convinced

Maybe, when DOING mathematical problems, .999... is "equal to" 1. However, in the real world, .999... and 1 are different. Period. The world's greatest mathematicians can write a treatise on it and I still would disagree. Common sense shows me that .999... and 1 are not equal. Without "proving" they are equal using equations, there is no proof. 208.255.229.66 (talk) 16:59, 25 March 2008 (UTC)

In the "real world", 0.5 and 1/2 are different, because they 'obviously' look different, and aren't even written using the same numbers. But math tells us that they're the same. The only reason that you'd consider the two problems to be different is because 1/2 = 0.5 has been ingrained into you since grade school, to the point that you naturally consider them the same without even thinking about the math basis for that assumption. On the other hand, 0.999... = 1 seems counter-intuitive, so you naturally consider them different without even thinking about the math basis for that assumption. But you've got to look past your assumptions and see the math to properly understand. --Maelwys (talk) 18:06, 25 March 2008 (UTC)
I'm curious: In what context did you encounter 0.999... in the real world? Besides, there are proofs using equations: Have a look at the article's "digit manipulation" or "infinite series and sequences" proofs. --Huon (talk) 18:20, 25 March 2008 (UTC)
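For reference, the infinite-series proof Huon points to sums a geometric series; in outline (standard textbook material, not a claim original to this thread):

```latex
0.999\ldots
\;=\; \sum_{n=1}^{\infty} \frac{9}{10^n}
\;=\; \frac{9/10}{1 - 1/10}
\;=\; 1
```

The middle step is the usual closed form a/(1-r) for a geometric series with first term a = 9/10 and ratio r = 1/10.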
In other words, you have no reasons to support what you say, yet you refuse to be persuaded otherwise, no matter what anybody says. Eric119 (talk) 19:34, 25 March 2008 (UTC)
No. It is common sense. .999... is not 1. They are two different numbers. Functionally, .999... is 1. Logically and semantically, it is not. 208.255.229.66 (talk) 19:37, 25 March 2008 (UTC)
Mathematical equations do not have qualifiers like "functionally", "logically" and "semantically" attached to them. Eric119 (talk) 19:43, 25 March 2008 (UTC)
.999... is not an equation. The article is trying to prove that two different numbers are one and the same. That just doesn't fly. 208.255.229.66 (talk) 19:45, 25 March 2008 (UTC)

It comes down to the fact that .999... will always be just short of 1. It is certainly not 1. It's VERY close, but not close enough. Also, I am not being a troll; I truly know that they are not the same number. 208.255.229.66 (talk) 19:28, 25 March 2008 (UTC)

Notice how you say ".999... will always"? That's the fallacy of your thinking. You don't see .9~ as a number, but as some process. To you, .9~ "will never" equal one; it "keeps getting closer" to 1 but "is always" just short.
9 will always be equal to 9. .999... will never exactly equal 1. I am seeing this as a number. 208.255.229.66 (talk) 20:06, 25 March 2008 (UTC)
The truth is, .9~ doesn't do stuff. It's not a process. You can't describe it as taking action. .9~ doesn't get closer to 1 for the same reason that .5 doesn't get closer to 1/2. .9~ simply is itself, at all times - and the proofs show that it has the same value as 1. Gustave the Steel (talk) 19:55, 25 March 2008 (UTC)
It doesn't matter if .999... "does stuff" or not. .999... continually approaches, yet never reaches, 1. The "proofs" do not persuade me. 208.255.229.66 (talk) 20:06, 25 March 2008 (UTC)
Consider this example: a computer starts with the value of 9/10, then adds 9/100, 9/1000, and so on. It will stop only when the result of such an addition is 1. When will it stop? Never! No matter how long you run this process, it will never reach one. IT WILL NOT REACH .9~, EITHER.
The problem is, you think .9~ is actually the process itself, instead of the number it will never reach. In this, you are clearly wrong. Gustave the Steel (talk) 20:13, 25 March 2008 (UTC)
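Gustave's computer example can be made concrete; a minimal sketch (using Python's `fractions` module for exact rational arithmetic, so floating-point rounding doesn't muddy the point; the variable names are my own):

```python
from fractions import Fraction

# Add 9/10, 9/100, 9/1000, ... one term at a time.
total = Fraction(0)
for n in range(1, 50):
    total += Fraction(9, 10**n)
    assert total < 1  # every partial sum falls strictly short of 1

# After 49 terms the gap to 1 is exactly 1/10^49: tiny, but not zero.
print(1 - total == Fraction(1, 10**49))  # → True
```

Every partial sum 0.9, 0.99, 0.999, ... is less than 1, and the program never terminates by reaching 1; the point of the thread is that 0.9~ names the limit of this process, not any of its stages.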
Here's the problem with your example: this is in terms of an equation. If you have .999... as a number, and 1 as a number, and they just EXIST, then .999... is not 1. 208.255.229.66 (talk) 20:37, 25 March 2008 (UTC)
Actually, there isn't a problem with my example. The fact that you talk about .9~ "getting closer" to something means that you aren't thinking of it as a number. Numbers don't get close to things. .9~ is precisely the same as 1, as the many proofs show; you refuse to accept them because you think .9~ is a process instead of a number. Gustave the Steel (talk) 03:28, 26 March 2008 (UTC)
If you are not convinced by proofs, you clearly have little understanding of what mathematics is. Of course, you are free to point out possible flaws in a proof, in which case it needs to be corrected, but rejecting a proof just because you don't like its conclusion is laughable. -- Meni Rosenfeld (talk) 20:30, 25 March 2008 (UTC)
I reject a proof not because I don't like it; but because it is incorrect. It is laughable that you seem to be getting offended and upset by my thought processes. 208.255.229.66 (talk) 20:37, 25 March 2008 (UTC)
With this I agree. I shouldn't be getting upset. -- Meni Rosenfeld (talk) 21:25, 25 March 2008 (UTC)
If the proofs are incorrect, can you point out where the mistakes are? Take for example the digit manipulation proof. For the proof to be wrong there'd have to be one line not following from the one preceding it. Which is the conclusion you doubt? -- Huon (talk) 23:48, 25 March 2008 (UTC)
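For readers following along, the digit-manipulation proof Huon refers to runs as follows, with each line following from the one before it (this is the argument from the article, reproduced for convenience):

```latex
\begin{aligned}
x &= 0.999\ldots \\
10x &= 9.999\ldots \\
10x - x &= 9.999\ldots - 0.999\ldots \\
9x &= 9 \\
x &= 1
\end{aligned}
```

Rejecting the conclusion requires rejecting one of these individual steps.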
[ec] We believe you that you truly think that they are not the same number. Alas, you are wrong, just as wrong as you would be if you claimed that 2+2 and 4 are not the same number. You can either take our word for it, or study the subject and reach the conclusion on your own. -- Meni Rosenfeld (talk) 20:01, 25 March 2008 (UTC)
Why take your word for it? I see two different numbers. Very simple. Taken as separate entities, doing nothing with them, just as they are, .999... and 1 are not the same. 208.255.229.66 (talk) 20:06, 25 March 2008 (UTC)
Just as 2+2 is obviously different from 4, right?
Why do you think 0.999... even means anything? Because they taught you at school that numbers have decimal expansions and that decimal expansions, even infinite ones, represent numbers? So? Was your school blessed by the holy spirits of maths? If a teacher tells you something, does that make it automatically correct? Your teachers weren't mathematicians, and even if they were, they probably "dumbed down" the true ideas so it would be suitable for children. So unless you know a foundation for real numbers, you have no reason to believe 0.999... means anything, let alone a number which is less than 1.
Why do you think .999... equals 1? Because of mathematical indoctrination. I know a foundation for real numbers. I also know that .999... is not 1. 208.255.229.66 (talk) 20:37, 25 March 2008 (UTC)
No, because of mathematical proof.
Okay, let's hear it. What foundation for real numbers do you know? -- Meni Rosenfeld (talk) 21:25, 25 March 2008 (UTC)
Once you actually approach this mathematically - with axioms, definitions, theorems, the whole deal - you will be able to define real numbers and decimal expansions, prove some things about them, and conclude that 0.999... = 1.
Don't take my word for it. Pick up a book about real analysis and study this. Until that time, please don't argue with people who know what they are talking about. -- Meni Rosenfeld (talk) 20:15, 25 March 2008 (UTC)
It is very simple. .999... and 1 are different. Just as you and I are different. 208.255.229.66 (talk) 20:37, 25 March 2008 (UTC)

Trolling and sincerity

Do we have any good evidence that there are actually people who seriously doubt 0.999...=1 and are not just being contrary and provocative to have fun, i.e. trolling? --Macrakis (talk) 18:36, 25 March 2008 (UTC)

I actually doubt that people are trolling here just for fun. The funny thing about the .9~=1 argument is that the people who disagree are almost religious about their viewpoint. Where else in mathematics would you see someone say "Nothing you say will convince me" or "I don't care about the proofs, I'm right anyway"? Gustave the Steel (talk) 20:02, 25 March 2008 (UTC)
Having participated in discussions relating to this article for quite some time, I can safely say that while there are many opponents who are definitely trolls, there are also some who are definitely not. I recall once enjoying the arguments of an opponent because they were actually reasonable. Well, to be fair, he wasn't protesting the equality of the real number 0.999... with 1, but rather the notion that the symbol "0.999..." should be automatically taken to mean the real number. I also think I remember a few skeptics who flipped after a bit of explanation. -- Meni Rosenfeld (talk) 20:07, 25 March 2008 (UTC)
If I use .999... in a mathematical problem, yes, I can take .999... as being equivalent to 1 for the purposes of said mathematical problem. Philosophically speaking, and being as technical as possible, and having an open mind, and being mathematically adept, .999... is not 1. 208.255.229.66 (talk) 20:11, 25 March 2008 (UTC)
You just won't take "no" for an answer, will you? "No" means "no". It means you are wrong. It means you don't know what you are saying, and are annoying people who do.
And don't take Anome's answer as meaning you are somehow right. If you said that 0.999... and 1 are different representations of a number, you would be right. But that's not what you are saying. -- Meni Rosenfeld (talk) 20:23, 25 March 2008 (UTC)
I agree with Gustave. There is something about .999... = 1 which is "obviously wrong" according to many people's common-sense intuition about numbers. Given that, it's very difficult to convince people of something that they consider to be obvious nonsense, particularly if the argument you are trying to use is itself subtle and sophisticated. From a psychological viewpoint, it provides a fascinating insight into people's intuitions about numbers, particularly the ideas that the reals are in one-to-one correspondence with their digit strings (which they do not seem to have any problem with being infinite), and their equally strong conviction that nonzero infinitesimals must exist among the real numbers. Unlearning some of your childhood mathematical intuitions is an essential part of learning the deeper concepts that underlie and supersede them. -- The Anome (talk) 20:14, 25 March 2008 (UTC)
To 208.255.229.66: you're right that the representations 0.999... and 1 are not the same thing, in the same way that the representations 0.5 and 1/2 are not the same. The assertion "0.999... = 1" is merely the assertion that (just as with the case of 0.5 and 1/2) they represent the same number, which is identical to saying that (in your words) [.999... is] "equivalent to 1 for the purposes of said mathematical problem." -- The Anome (talk) 20:17, 25 March 2008 (UTC)
...or, to echo Meni's comments from above, "2+2" and "4" are not the same either, considered as representations; but they also have the property of representing the same number: that is to say, that 2+2 = 4 for the purposes of mathematical problems. It's just that you internalized this idea early enough in life to regard it as self-evident. It's actually quite difficult to put a precise meaning on "2", "4", or indeed "+" and "=", once you start thinking about it in more detail and stop relying on intuitions of self-evident obviousness; and this is where you step away from childhood arithmetic, and on to the path to sophisticated mathematics. Fortunately, many people have been working on this problem for hundreds of years -- not least on the problem of making convincing arguments about why any of it makes sense -- and some of them are even here to help you. -- The Anome (talk) 20:37, 25 March 2008 (UTC)
I can agree 2+2 and 4 are the same thing. I can also agree .5 and 1/2 are the same thing. They are different in how they are written, but logically, they are the same number. I cannot agree that .999... and 1 are the same. Whether representations, absolute values, or any other idea, they are different. As far as smart people being here to “help me”, I am also here to help the “smart people” see my viewpoint. How can someone as intelligent as most of you here tell me that .999… and 1 are the same? They simply aren’t. If light traveled at 1 mile per second, and someone found a way to slow it down to .999… miles per second, the light traveling at 1 mile per second would get ahead of the light traveling at .999… miles per second. .999… is less than 1. 208.255.229.66 (talk) 20:48, 25 March 2008 (UTC)
Interesting circular example. Your example to show that 0.999...≠1 relies on the assertion that 0.999≠1. Oli Filth(talk) 20:59, 25 March 2008 (UTC)
And .999... IS not equal to 1. See below. 208.255.229.66 (talk) 21:06, 25 March 2008 (UTC)
Actually, according to the proofs from the article, something travelling at 1 mps is already travelling at .9~ mps. Slowing it down would take it below .9~ mps.
Do you want to prove us all wrong? Pick apart one of the proofs from the article! Assertions don't convince anyone about anything in the field of mathematics. Proofs are necessary to make your point, and so far you have none. Gustave the Steel (talk) 03:43, 26 March 2008 (UTC)
The nice thing about mathematicians is that they will take you seriously. (By the way, I'm not one of the "smart people" I was talking about -- I'm merely an applied math user with a passing familiarity with the field, and don't consider myself a mathematician) At the risk of rehashing the arguments in the FAQ, if they're different, can you tell me the value of 1 - 0.999… is, if not 0? -- The Anome (talk) 20:55, 25 March 2008 (UTC)
I was responding when this page was moved; the gist of my response was that no, I cannot tell you the value of 1 - .999... . Because .999... is an abstract value and not absolute, it excludes itself from being used in a mathematical equation, unless it is rounded UP to 1. Because it MUST be rounded UP, .999... is less than 1. 208.255.229.66 (talk) 21:06, 25 March 2008 (UTC)
And your proof/evidence of this is? Oli Filth(talk) 21:11, 25 March 2008 (UTC)
If you want to multiply, for example, .999... by 17, you would not be able to do the calculation because the calculation would never be finished. To complete the calculation, you would have to multiply by the closest real number, which is 1. 1 is rounded up from .999... . 208.255.229.66 (talk) 21:13, 25 March 2008 (UTC)
Your argument holds equally to, for example, multiplying (1/7) by 28. (1/7) in decimal notation is 0.142857142857..., so by your argument, "the calculation would never be finished". But of course it can be completed. In the case of 0.999... * 17, replace (1/7) with (1/1), and 28 with 17. Oli Filth(talk) 21:20, 25 March 2008 (UTC)
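One way to see why the calculation terminates is to work with the exact values instead of their infinite digit strings. A minimal Python sketch using exact rational arithmetic:

```python
from fractions import Fraction

# (1/7) has the repeating decimal expansion 0.142857142857...,
# yet multiplying it by 28 finishes immediately when we use the
# exact value rather than going digit by digit:
assert Fraction(1, 7) * 28 == 4

# The same applies to 0.999... * 17: since 0.999... denotes the
# exact value 1 (i.e. 1/1), the product is simply 17.
assert Fraction(1, 1) * 17 == 17
```

The repeating decimal is just one notation for the number; the arithmetic is done on the number itself.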
Sorry about the delay: my TeX is rusty. All I am saying is this: if we play by the standard ground rules of mathematics, all that "0.999... = 1" means is that

lim_{n→∞} (9/10 + 9/100 + ... + 9/10^n) = 1
and nothing more. Now, we can easily prove this to be true, within these standard ground rules, based on the elementary definition of limits. Now, this captures everything I mean by "0.999... = 1". I'm not making any other claims, philosophical or otherwise, and that's the only claim I'm making: this is what I understand "0.999... = 1" to mean. Now, I can demonstrate the validity of the above, by playing with symbols within the standard ground rules of mathematics. The proof is simple and mechanical. Given this, I'm afraid to say that if you don't believe this, you are either using a different definition of "0.999..." or "=" from me, or you are saying that the standard mathematical ground rules don't apply. -- The Anome (talk) 21:23, 25 March 2008 (UTC)
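The limit claim can be checked on partial sums: the gap between 1 and the n-term sum is exactly 10⁻ⁿ, which can be made smaller than any positive tolerance. A short Python sketch using exact fractions:

```python
from fractions import Fraction

def partial_sum(n):
    """Sum of the first n terms 9/10 + 9/100 + ... + 9/10^n, exactly."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

# The shortfall from 1 is exactly 1/10^n for every n, so it shrinks
# below any given positive bound -- the epsilon-N definition of the
# limit being 1.
for n in (1, 5, 20):
    assert 1 - partial_sum(n) == Fraction(1, 10**n)
```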
I agree with The Anome's take on it - "0.999..." is a symbol (or a collection of symbols, if you like). By itself it has no meaning. You can give it any meaning you like. But, as soon as you give it a meaning that is consistent with other decimal representations, you find that it behaves mathematically exactly like 1, at which point the mathematicians say that it is equal to 1. There are a lot of assumptions between "I have this symbol 0.999..." and "0.999... = 1", and throwing out any of those assumptions will break the second, but the point is that they're the same assumptions that make a whole lot of other things work, so you can't break the equality without breaking something else - an axiom or topological or field property of the real numbers, or the meaning of decimal expansions, or something. For the sake of example, let's "break" 1 - 0.999... = 0. In order to do so in a meaningful way, we're going to have to "break" one or more of the following (or possibly something I've missed out), shown hierarchically (i.e. to break something in the list, you'll need to break something else listed below it):
  • The fact that 0.999... is a real number, equal to 0.9 + 0.09 + 0.009 + ...
  • The equivalence between real numbers and decimal expansions.
  • The definition of an infinite decimal representation as an infinite series.
  • The ability to subtract 0.999... from 1, and get a real number as a result.
  • The definition of subtraction as the addition of the additive inverse.
  • The existence of additive inverses in the reals.
  • Additive closure of the reals.
  • The ability to move things around on the left-hand side. (I can be more specific if you'd like.)
  • Absolute convergence of (0.9, 0.99, 0.999, 0.9999, ...)
  • Absolute convergence.
  • Topological closure of the reals.
  • The ability to manipulate absolutely convergent sums.
  • The ability to put a bound on 0.999...
  • Again, absolute convergence of the partial sums.
  • Comparison between 0.999... and the partial sums.
  • Boundedness of real numbers.
  • Equality of the left-hand side with zero.
  • Equality of a number with zero.
  • Equality of two real numbers.
  • Zero.
  • Bringing the left-hand side to the possibility of equalling either an infinitesimal, or zero.
  • The non-existence of infinitesimals in the reals.
So, what do you want to break? From where I stand, a construction of the reals (as topological closure of the rationals, which is the quotient space of the integers, which is formed by attaching additive inverses to the natural numbers, which are constructed from the Peano axioms) and a definition of decimal representations (as ordered combinations of elements of {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}) and a relationship between the two should be enough to prove all of the above. Confusing Manifestation(Say hi!) 01:31, 26 March 2008 (UTC)

In between 0 and 1, there are infinite numbers. Any number in between 0 and 1 is not going to be equal to 0 or 1, no matter how close. Functionally, mathematically, and formulaically, it acts and functions as 1. But .999... is not equal to 1. 208.255.229.66 (talk) 15:23, 26 March 2008 (UTC)

While that's true, it's completely circular. 0.999... isn't between 0 and 1, it is 1. There are proofs in the article that 0.999...=1. The only way you can dispute that fact is by finding a flaw in each of those proofs (but I'll settle for a flaw in just one of them for now). You seem to be using a very strange definition of equality... --Tango (talk) 15:29, 26 March 2008 (UTC)

Only the OTHER side can have Trolls?

I've had to endure a number of very rude and insulting comments from people who insist that .999...= 1. Consider this:

"You just won't take "no" for an answer, will you? "No" means "no". It means you are wrong. It means you don't know what you are saying, and are annoying people who do. And don't take Anome's answer as meaning you are somehow right."

If you walked into a conversation at this point, would you expect the other side to respond positively to this? Algr (talk) 03:39, 27 March 2008 (UTC)

Comments like that are the result of a deteriorating discussion. When the inequality believer insists without basis that he/she is correct in spite of the proofs, evidence, and logic backing the other side, the equality believer is going to get frustrated and start responding in kind. That doesn't mean you should blame the equality side exclusively for using assertions and insults; the blame should go both ways. Gustave the Steel (talk) 04:03, 27 March 2008 (UTC)
I never once said I am correct and others aren't; I also have not been the slightest bit rude. I am simply stating that, after reading the article, I personally am not convinced that .999... is equal to 1. 208.255.229.66 (talk) 14:06, 28 March 2008 (UTC)
You did assert, frequently without mathematical support, that you were correct and others aren't in almost every one of your posts under "Still not convinced." Gustave the Steel (talk) 05:13, 31 March 2008 (UTC)
Thing is, mathematicians have to be right about mathematics, and math is all about proofs, and a correct proof is sufficient to say This Is So about the question under discussion. We (I get to sometimes call myself a mathematician, having a degree in it) have to work particularly hard not to be snarky when confronted with people who obdurately refuse to accept mathematical proofs of mathematical things. --jpgordon∇∆∇∆ 04:13, 27 March 2008 (UTC)
I think there is a problem here, and it isn't really anyone's fault, nor is it really specific to the article 0.999.... Someone has a problem with an article, but they don't understand their own motivations or Wikipedia's motivations well enough to clearly state what the problem is. They make a fuss on the talk page, rapidly oscillating between two orthogonal arguments:
  1. I am right, and the article is wrong.
  2. I have a viewpoint, and the article doesn't represent it.
Alas, the complainer is not an effective communicator, and it is never clear when they mean 1 and when they mean 2. Inevitably it will occur that one of their assertions of 1 is answered with Wikipedia policies they don't care about, or one of their assertions of 2 is answered with a content-related argument they don't understand. The complainer suddenly chooses this moment to do some meta-analysis, seizes on the incongruity, and feels misunderstood.
Anyway, we're already doing the right thing in directing people who seem to be hung up on 1 to the Arguments page. Melchoir (talk) 08:09, 27 March 2008 (UTC)
  1. I apologize for any discomfort my comment above has caused.
  2. Regarding the title of your question: Of course not. Anyone can choose to troll. However, while my comment might not be completely civil, I know of no definition of trolling under which it may be seen as such.
  3. If you walk into any conversation at an arbitrary point, you will always find various ways to misinterpret some statements. To understand a discussion one needs to follow it from the beginning and take each saying in context.
  4. While having the entire world disagree with one doesn't mean he's wrong, it does mean he is probably wrong. It means he has to acknowledge the fact that he might be wrong, and that he is the one who should get to the bottom of the issue to understand why he is in disagreement with everyone else. He can choose to keep his views to himself, or to explain his arguments (having already undergone the self-criticism process) to others. But no matter how you look at it, waltzing in and stating "you are all wrong and I don't care what you have to say" is just inappropriate.
  1. I doubt the entire world disagrees with me. I acknowledge I might be wrong; but I also might be right. What I am stating, that .999... is not equal to 1, makes sense and is logical to me. 208.255.229.66 (talk) 14:06, 28 March 2008 (UTC)
  1. So the context here is that anon did just that. People tried in various ways to explain the matter at hand to him, or alternatively, directed him to where he could explore this further. But anon ignored all appeals to reason. Different people respond well to different approaches. Some people, when stubbornly stuck in a certain mindset, get thrown off balance and reconsider their position when confronted with a harsh statement. All else having failed, I had to try.
-- Meni Rosenfeld (talk) 08:51, 27 March 2008 (UTC)
  1. I never reconsidered my position due to a harsh statement; nor was I offended.
Don't assume that your "proofs" equal reason. The last time I tried to talk proofs here, you showed me one that forced .999... to equal 1 by multiplying both sides by zero. (And no one could see anything wrong with this!) All the other proofs here go around in circles, finding more and more elaborate ways to include the conclusion as some step in the middle somewhere. I fully acknowledge that I don't know calculus, and don't claim to truly understand .999.... But don't expect me to take your statements on faith when you keep making such blatant errors as this in the parts that I do understand. I think I'd be very foolish to try to learn calculus here. Algr (talk) 21:41, 27 March 2008 (UTC)
Well, you don't need calculus to prove the equality; calculus can be used, but it's not necessary. Simple algebra suffices; x=0.999..., 10x=9.999..., 10x-x = 9, x=1. Or, if the "shifting a zero onto the 'end'" bothers you, x=0.999..., x/10 = 0.0999..., x-x/10 = .9x = 0.9, x=1. --jpgordon∇∆∇∆ 21:51, 27 March 2008 (UTC)
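The algebra can be sanity-checked on finite truncations of 0.999...: for x with n nines, 10x − x falls short of 9 by exactly 9/10ⁿ, and that shortfall vanishes as n grows. A quick Python check with exact fractions:

```python
from fractions import Fraction

# x_n = 0.99...9 with n nines, as an exact rational.
# 10*x_n - x_n = 9*x_n misses 9 by exactly 9/10^n, a gap that
# goes to zero -- consistent with 10x - x = 9 for x = 0.999...
for n in (1, 4, 12):
    x = Fraction(10**n - 1, 10**n)   # 0.9, 0.9999, ...
    assert 9 - (10 * x - x) == Fraction(9, 10**n)
```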
If x=0.999..., and 10x=9.999..., then 10x-x = .999... 208.255.229.66 (talk) 14:06, 28 March 2008 (UTC)
If x=0.999... and 10x=9.999..., then
10x − x = 9.999... − 0.999... = 9,
not 0.999... as you contend. 70.20.81.23 (talk) 14:35, 28 March 2008 (UTC)
Algr, out of curiosity I'd like to learn what you mean by circular reasoning. Which of the following fall under your meaning:
  1. Assuming X and using the assumption to derive X.
  2. Assuming results that necessarily imply X and using the assumption to derive X.
  3. Assuming X and using the assumption to derive results that necessarily imply X.
? Perhaps none, perhaps all, perhaps one or two? Melchoir (talk) 22:01, 27 March 2008 (UTC)
Algr, you claim someone tried to "prove" something by dividing by zero. I can't remember reading something like that. Could you provide a date or a link to the corresponding edit? -- Huon (talk) 22:13, 27 March 2008 (UTC)
Either I'm growing senile or you're making stuff up. Like Huon, I have no memory of anyone offering a proof which utilizes multiplication\division by zero. Please supply a link (preferably a diff).
It seems you have chosen to ignore my earlier observation that the proofs in this article rely on preliminary results which are currently not stated in Wikipedia, and possibly shouldn't be. I never suggested that you should learn "calculus" here. I fully respect your desire to achieve first-hand understanding of the equality, and your reluctance to fully accept it without a firmer grasp of its foundation. But endless debates here are not the answer - only a study of a comprehensive textbook is.
I'll also reiterate that an encyclopedia isn't supposed to convince anyone of anything. It is supposed to report facts and cite them. The proofs we put in some articles, a concept unique to mathematics, are an added bonus. -- Meni Rosenfeld (talk) 23:13, 27 March 2008 (UTC)
I also have no recollection of such a "proof", but if it was made, I'm sure it was just a simple mistake (we are, after all, human). Had you pointed it out at the time, I'm sure it would have been immediately fixed. --Tango (talk) 23:57, 27 March 2008 (UTC)
I believe I found what Algr meant; see his comment here. Algr, if that was indeed the problem, then on the one hand Meni Rosenfeld answered your critique, on the other hand, if in that proof U-N were indeed zero, then N=U. So either there's no division by zero, or 0.999...=1. Unfortunately, Ohanian's proof is indeed not correct, but that's not because of division by zero. Huon (talk) 00:15, 28 March 2008 (UTC)
Here is a better link. Algr would do well to read up on Reductio ad absurdum and note that the multiplication occurred in a branch where N<U was assumed (thus U − N > 0). Your mistake was pointed out at the time, but of course you have chosen to ignore it and state now that "no one could see anything wrong with this". Then you wonder why I lose my patience. -- Meni Rosenfeld (talk) 01:44, 28 March 2008 (UTC)
  • Hm. Don't assume that your "proofs" equal reason. Well, that hits it on the head. The equality of 0.999... and 1 in the reals is a purely mathematical issue, and in the realm of pure mathematics, proofs do equal reason. In a philosophical realm, I'm perfectly happy to consider 0.999... to mean the largest quantity that is less than 1; I don't know what the properties of that number are, other than it's less-than-oneness, and it's biggerness than any other less-than-one thing, but it's a perfectly valid philosophical construct. It just doesn't fit into mathematics because of the Archimedean property. --jpgordon∇∆∇∆ 06:32, 28 March 2008 (UTC)
I wouldn't be happy to consider 0.999... to mean that. 0.999... is a mathematical construction, a special case of the more general concept of decimal expansions of real numbers. You can't just decide that it means something else entirely.
That doesn't mean you cannot name the phenomenon of a number being just short of 1. Just like we use the term "nonzero infinitesimal" to describe an idea which happens to be impossible in the reals (for every real x > 0 there is a natural n with 1/n < x), we can use a term such as "almost unity" to describe a quantity which is just short of unity (x < 1, with no y satisfying x < y < 1). Such a thing exists in the integers (where it is just the number 0), but not in the reals and in many other settings. But it's simply wrong to use the symbol 0.999... to refer to such a quantity. -- Meni Rosenfeld (talk) 09:55, 28 March 2008 (UTC)
.999..., philosophically, NOT mathematically, is not 1. I think that is the point I am trying to make, and most of you respond with "proofs" and such. Proofs do not influence philosophies; they are used in mathematics. 208.255.229.66 (talk) 14:06, 28 March 2008 (UTC)
Okay then, you're right. 0.999... doesn't look like 1, 0.5 doesn't look like 1/2, and 2+2 doesn't look like 4. All philosophically speaking, of course. However, since this article is all about the mathmatical equality, philosophical views are meaningless. (in the same way that, to your philosophical views, mathematical proofs are meaningless) --Maelwys (talk) 14:30, 28 March 2008 (UTC)
0.999... is a mathematical concept, so of course we use mathematical proofs to investigate it. "Philosophy" has nothing to do with it. How does philosophy define 0.999...? If it defines it in a different way than maths, then sure, it's going to mean a different thing, but it's going to mean a thing which isn't actually used by anyone. --Tango (talk) 14:32, 28 March 2008 (UTC)
I'll second that. If a philosopher ever wrote something on the subject of 0.999..., it might make an interesting addition to the article, but I doubt that. Why should 0.999... be something different philosophically than mathematically? And what should it be? -- Huon (talk) 14:40, 28 March 2008 (UTC)
How about "the closest approximation in the real world to the mathematical formulation 0.999..."? (Assuming there's a countably finite number of particles in the real universe with which to represent the expansion of the fraction. Or something like that.) Or whatever the philosopher wants, since philosophy is non-rigorous; some philosopher added just yesterday in the main article that 0.999... != 1 because there aren't any ones in 0.999... -- that isn't math, but it doesn't matter, it's some philosopher's essentially religious decree about the nature of equality. But this article is about the mathematical construct 0.999..., and the mathematical construct 1, and the mathematical construct of equality, so we can ignore philosophical conceptions and stay within our mathematical realms. That's why I brought up philosophy in the first place -- to make the distinction available so non-mathematical arguments can be easily dispensed with. --jpgordon∇∆∇∆ 15:26, 28 March 2008 (UTC)
I know you're just playing devil's advocate, but that doesn't make any sense - the mathematical formulation 0.999... is 1, so the closest real world approximation (whatever that means) is still 1. The only way I can see of defining 0.999... in such a way as to get the result a lot of people expect is simply to define it as "an non-zero positive infinitesimal amount less than 1" (which obviously isn't a real number, but it's easy enough to define). --Tango (talk) 16:04, 28 March 2008 (UTC)
That's math brain talking. Philosopher brain isn't bound by mathematical rules, and can say whatever it pleases about anything. Angels on the head of a pin, y'know? --jpgordon∇∆∇∆ 16:06, 28 March 2008 (UTC)
Yes, but you said "mathematical formulation" - if you're defining the philosophical concept in terms of the mathematical one, then it is bound by mathematical rules. --Tango (talk) 16:12, 28 March 2008 (UTC)
Not if I'm a philosopher who rejects mathematical proofs. The result, of course, is nonsense, but philosophers get to spout nonsense. --jpgordon∇∆∇∆ 16:18, 28 March 2008 (UTC)
Isn't it meant to be at least self-consistent nonsense? --Tango (talk) 16:19, 28 March 2008 (UTC)
One would hope. But that's why I'm a mathematician rather than a philosopher; it's a cold logical place where even if it doesn't seem to make any sense at all, if it's proven, it's true. E.g., Monty Hall problem. --jpgordon∇∆∇∆ 16:47, 28 March 2008 (UTC)
I prefer the Euler Equation as an example of such things. As Benjamin Peirce said: "It is absolutely paradoxical; we cannot understand it, and we don't know what it means, but we have proved it, and therefore we know it must be the truth." --Tango (talk) 17:06, 28 March 2008 (UTC)
If you want to talk philosophically, are you familiar with the sense and reference (or intension and extension) distinction? .999... and 1 clearly have different senses, but you still need to do some analysis to determine whether they have the same referent. In order to understand what you're talking about, you don't just assert that they don't have the same referent, like you don't assert that Hesperus ≠ Phosphorus just because they are different concepts. If you somehow find that they are not equal, then we are talking about different names altogether. But if you want to maintain that we are talking about the same names, then you will eventually have to agree that the two names ('.999...' and '1') refer to the same number.
The charge above that "philosophy is non-rigorous" is ridiculous. –Pomte 22:34, 29 March 2008 (UTC)

Meni, if I have "chosen to ignore" anything, it is because there are six different people here making different points, and I can't respond to them all. But you have all chosen to ignore my response to the argument that U-N was not zero, and gone on to make the same points again. Please go back and read that. The tactic that I keep running into here is that people will throw unjustified assumptions into their opponent's views so that they can "disprove" them when those assumptions prove false. It is like this argument:

1) Horses cannot ride on pavement without horseshoes.
2) Horseshoes cannot be fitted onto a car.
Therefore: Cars cannot ride on pavement.

It's a perfect "proof" for those who want to believe it. But the hidden assumption is that if cars have any similarity to horses, then they must equal them in all ways. The Pro-one crowd here has done this on at least two occasions:

1) The assumption that a value (U-N) that is defined as being indivisible would somehow respond normally when used in an equation. Neither infinity nor zero can be used in this way, but that does not prove that infinity EQUALS zero, so why assume that no other value with this property can exist?
2) The "last nine" assumption. While the Pro-one crowd insists that there is no "last nine" in .999..., one of the main proofs for .999...=1 makes a clear, (and very bizarre) prediction about it. In the proof that involves multiplying .999... by 10, how is it possible for 9.999... to have a different number of significant digits than .999...? This is completely unprecedented in any real number. This proof has an extra 9 appearing at the "end of infinity" to cancel out all other nines. If you count the significant digits in 9.999...-.999...=9, you get Infinity - Infinity = 1. Switching to .0999 as Jpgordon suggests does not fix this, you still somehow have differing numbers of significant digits.

- BTW, the anonymous posts above supporting != are not from me. Everything I'd written here is signed: Algr (talk) 22:55, 30 March 2008 (UTC)

Actually, Algr, the point is that there is no last digit in either of them, which is why when you subtract the one from the other all the 9s cancel out. In other words, since there is no "last 9" in either number, you can look as far as you like down the 9.000... that you get from the subtraction, and every decimal place will be 0. If there were some kind of "last digit" that you could get to, then that would be the place that a problem would occur, but since there isn't one there's never a problem. Like you say, infinity - infinity = 1, but it's more like 1 + infinity = infinity (i.e. sticking stuff in front of an infinite number doesn't change its overall length). Confusing Manifestation(Say hi!) 23:48, 30 March 2008 (UTC)
I haven't gone back and read the appropriate archive, so I won't comment on point (1), but point (2) is easy. The number of 9's in 0.999... is infinite, ℵ₀ in fact, and 1 + ℵ₀ = ℵ₀ (see cardinal numbers). So, one does indeed have an extra 9, but they still have the same number of 9's - seems odd, but that's how infinity works (in fact, I think it can be taken as the definition of infinity). --Tango (talk) 23:53, 30 March 2008 (UTC)
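The "same number of 9s" point amounts to a bijection between digit positions: shifting every index by one pairs the 9s after the decimal point of 9.999... with those of 0.999..., with none left over. A toy Python sketch of the pairing, checked on a finite prefix (the digit functions here are illustrative stand-ins, since every digit of either expansion is 9):

```python
def digit_0999(n):
    """n-th digit after the decimal point of 0.999... (every one is 9)."""
    return 9

def digit_9999(n):
    """n-th digit after the decimal point of 9.999... (every one is 9)."""
    return 9

# The map n -> n + 1 pairs position n of 9.999... with position n + 1
# of 0.999...; no position of either expansion is ever left unmatched,
# which is what "the same (countably infinite) number of 9s" means.
assert all(digit_9999(n) == digit_0999(n + 1) for n in range(10**4))
```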
On the topic of this thread, the question is "Does only the 0.999... side have trolls?" In its proper definition, a troll is someone who is deliberately out to cause trouble by arguing for the sake of arguing. And, in general, a troll will respond to all valid arguments with "No I'm right you're stupid". If someone came in here and went "0.999... = 1 you fags STFU" the response would, presumably, be something along the lines of "congratulations on getting it right, now please read up on civility" - in other words, trolling on the pro-1 side isn't likely to have much of an impact. On the other hand, someone steadfastly refusing to accept even a single argument in the article, even after every nitpicking detail is explained to them, will almost certainly provoke a response and will, after a while, feel like they're being deliberately obstinate in order to annoy people. Unfortunately, as a result of dealing with too many of this type of person, the staunch, battle-weary defenders of the pro-1 camp may find their AGF charm to be losing its power, and call troll much earlier than would otherwise be warranted. Confusing Manifestation(Say hi!) 00:04, 31 March 2008 (UTC)
Please read Real numbers#Axiomatic approach. If N, U ∈ ℝ and N < U, then on one hand U − N > 0, and thus U − N ≠ 0, and on the other hand U − N ∈ ℝ, and thus 1/(U − N) ∈ ℝ. Therefore U − N can be divided by, just like any other nonzero real number. There aren't any mysterious real numbers with bizarre properties that don't conform to these rules. -- Meni Rosenfeld (talk) 00:18, 31 March 2008 (UTC)
Concerning point (1): 0.999... is a real number, just as every other number we encounter in everyday life (such as, say, 0, 1, 2, 3, 0.5, the square root of 2, pi, e, and so on). Now you could argue that it should be something else - immediately leading to the question of what it should be, and a lot of related problems. The article even mentions infinitesimals, but that's all pretty bizarre and leads to a loss of properties we expect (for example there might then be numbers without any decimal representation, and there might be several numbers with claims to being represented by "0.999..."). Now if 0.999... is a real, as it should be, we get to use all the nice properties of the real numbers. They form a field. Thus, we can subtract, and the result is again a real number. Furthermore, we can divide any real number by any non-zero real number (with the result again a real number). So if we assume N<U, we know that U-N is a non-zero real number, and we also know that we can divide by it. -- Huon (talk) 00:32, 31 March 2008 (UTC)
I think what Huon's trying to explain is that the proof is using a contradiction: start by assuming that N<U, then for the rest of that proof you can always treat N and U as being unequal numbers. It's only when you reach the end and find yourself in a contradiction that you can say "the problem was in the assumption, so in fact N = U". Confusing Manifestation(Say hi!) 04:04, 31 March 2008 (UTC)

Section break - back to trolls

Also, again on the troll point - there's a world of difference between "I've read through all of these proofs, but I'm still not convinced that they haven't made some hidden error somewhere" and "I don't care how many proofs you show me, common sense says 0.999... doesn't equal 1 and that's that" - the first is at least slightly open to the possibility that (a) there aren't any hidden flaws, except in their understanding, or (b) there are flaws, but the conclusion may still be valid if those flaws can be dealt with, whereas the second is essentially saying that they completely discredit the entire framework of modern mathematics, since the methods used to prove the equality are the same as those used to prove pretty much everything else. You can see how the second one sounds a lot more troll-y than the first, much in the same way "I don't understand how life could have formed out of a random amalgamation of chemicals" is less trolly than "I ain't kin to no ape!", or "I don't agree with the policies of the Bush administration" is less trolly than "George W Bush is a WP:BLP violation preemptively redacted!" The tricky bit, though, is when they say the former, but they say it repeatedly and without listening to any arguments presented, to the point that you start to feel that they're really saying the latter - that's a particularly nasty form of trolling. Confusing Manifestation(Say hi!) 04:29, 31 March 2008 (UTC)

Let's talk about 0.333... and 1/3 for a while - and maybe pi, too!

In the discussions of whether 0.9~ is equal to 1 or not, some people have claimed that 0.3~ is not equal to 1/3. The specific expression I've seen on more than one occasion is something like "0.3... is an approximation of 1/3; they aren't actually the same." So, I've created this section for us to discuss whether or not 0.3~ is, in fact, equal or not equal to 1/3. Before I weigh in, there's a few suggestions I'd like to make for those of you who want to participate:

1. Even though this page is meant for discussion of .9~, and the discussion of .3~ in this context is very closely related to it, I'd like to keep this section focused on .3~ and 1/3.

2. As this is a discussion of math, the debate should mostly be carried out through proofs, equations, and logic. Arguments based on simple contradiction, vague philosophical notions, and stubbornness will not persuade anyone who doesn't already happen to agree with you.

3. If you know, right now, that no proof or argument will change your mind, then you really shouldn't spend your time reading proofs and arguments - or replying to them. You should probably reconsider why you're even here. Personally, I would love to be proven totally, irrefutably wrong; it would be a mind-opening experience!

4. Let's keep the discussion limited to the set of real numbers. I'd rather not bring in alternative numbering systems if I can help it. I don't use them, you see, and I doubt you do either.

So, are we all ready to discuss math in a calm and logical manner? Good! Let's get started.

It is my observation that any real value can be expressed as a decimal number. However, not every value may be expressed as a finite decimal value. Pi is a great example.

We will never know the precise value of pi expressed in decimal, because it would require infinite space to express; as it happens, we don't actually need to know its precise value to work with it. The fact that a unit circle has a circumference of 2π inches or that radians may be measured in terms of π does not change based on how far we calculate π. Thus, π is not an approximation. It is a symbol for the ratio of a circle's circumference to its diameter. (3.14159, on the other hand, is an approximation.)

Enough about pi. If you long-divide 1 by 3, your answer will at first be 0, and then 0.3, and then 0.33, and so on. You and I both agree that no matter how long you keep dividing, you'll just see more 3's at the end. There isn't going to be a surprise 4 that pops up after a while. We don't have to divide until the end of time in order to know that 1 divided by 3 results in an infinite amount of 3's following a decimal point.

Division does not produce approximations. Even if you stop early in a division problem and take a remainder, the incomplete quotient may be added to (remainder/divisor) to produce the precisely correct value. In the case of 1/3, we don't need to stop early or take a remainder; we know that the result, expressed in decimal without stopping early, is infinite 3's after the decimal. Thus, .3~ is a symbol, not an approximation, of the infinitely repeating decimal which is the exact result of dividing 1 by 3.

I await your replies eagerly. Gustave the Steel (talk) 05:43, 30 March 2008 (UTC)
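Gustave's claim above – that an incomplete quotient plus (remainder/divisor) gives the precisely correct value – can be checked with exact rational arithmetic. A minimal sketch (the helper name `stop_early` is invented here for illustration):

```python
from fractions import Fraction

def stop_early(dividend, divisor, digits):
    """Long-divide, but stop after `digits` decimal places and
    return (incomplete quotient, remainder) as exact fractions."""
    scale = 10 ** digits
    q, r = divmod(dividend * scale, divisor)
    return Fraction(q, scale), Fraction(r, scale)

# Stop dividing 1 by 3 after only 4 digits: quotient 0.3333, remainder 0.0001.
q, r = stop_early(1, 3, 4)
assert q == Fraction(3333, 10000)
assert r == Fraction(1, 10000)
# Adding remainder/divisor restores the exact value, as claimed above.
assert q + r / 3 == Fraction(1, 3)
```

So stopping early loses nothing as long as the remainder is carried along exactly.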

4. I don't really _use_ real numbers, and I doubt you do either. Otherwise you wouldn't care about decimal approximations of pi.
So 1 divided by 3 produces an infinite sequence of 3's after decimal point. How does that prove that .3~ is not just an infinitely good approximation? Tlepp (talk) —Preceding comment was added at 19:50, 30 March 2008 (UTC)
I'm a maths student, and I certainly use real numbers, and generally don't care about decimal approximations of pi (it's 3 and a bit, that's all I need to know, and I rarely need that). They might not be too useful for physical quantities, but they're useful in more abstract things. And, if we're sticking to real numbers, the only "infinitely good approximation" is being precisely correct, by the Archimedean property. --Tango (talk) 19:59, 30 March 2008 (UTC)

(outdent)I know nothing about calculus or real analysis or anything of the sort, in fact I'm only in 8th grade math, so if the following makes no sense from a calculus POV you'll need to explain it to me in a way I'll understand (I learn quick though). I believe that we can prove that 0.3~ is equal to 1/3 if we use digit manipulation (like for the 0.9~ proof) and without having to resort to complicated proofs.
Let x=0.3~
10x=3.3~
9x=3
x=1/3
Q.E.D.
I think this is enough to prove that 0.3~=1/3.--Sunny910910 (talk|Contributions|Guest) 00:09, 31 March 2008 (UTC)
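The digit-manipulation step can be probed numerically: truncate 0.3~ to n threes and watch what 9x loses to the truncation. A sketch with exact fractions (the helper name `x` is invented here):

```python
from fractions import Fraction

# x(n) = 0.33...3 with n threes, as an exact fraction.
def x(n):
    return Fraction(10**n - 1, 3 * 10**n)

# 10*x(n) shifts the digits, but the truncation leaves a deficit in 9x:
# 9*x(n) = 3 - 3/10^n, so the deficit is exactly 3/10^n.
for n in (1, 5, 20):
    assert 3 - 9 * x(n) == Fraction(3, 10**n)
```

The deficit 3/10^n vanishes as n grows; with infinitely many 3s there is nothing left over, which is what 9x = 3 and x = 1/3 express.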

Yes, this is correct. However, it relies on a few features of decimal expansions which themselves are not as easy to prove, thus it isn't likely to convince any opponent. -- Meni Rosenfeld (talk) 00:33, 31 March 2008 (UTC)
Good point. So what are these "features of decimal expansions" (or which are these)?--Sunny910910 (talk|Contributions|Guest) 00:36, 31 March 2008 (UTC)
The standard (faulty) counter-argument is that 10x can't equal 3.3~, because if it did you'd mysteriously have gained a 3. In effect, you'd have to prove how an infinite decimal expansion behaves under multiplication by 10. I believe Gustave tried to prove 1/3=0.3~ in some other way in order to then convince those who doubt the digit manipulation proof for 0.9~. -- Huon (talk) 00:55, 31 March 2008 (UTC)
[ec] Basically you need a theorem that will guarantee that long multiplication and division work as we are used to not only for finite decimal expansions, but for infinite ones as well. -- Meni Rosenfeld (talk) 00:58, 31 March 2008 (UTC)
For long division, I'm wondering if simple induction will convince anyone who doubts it.
Let 0.a_1a_2a_3... be the decimal representation of 1/3. We can find the a_n's using long division or the division algorithm.
To find a_1: Since the divisor is 3 and the dividend is 10, it follows that the quotient a_1 = 3, the remainder is 1, and the next dividend is 10.
To find a_2: Since the divisor is 3 and the dividend is 10, it follows that the quotient a_2 = 3, the remainder is 1, and the next dividend is 10. And so on, i.e.
Suppose a_i = 3 for some natural i and the dividend is 10. It follows that the quotient a_(i+1) = 3, the remainder is 1, and the next dividend is 10.
By induction, a_n = 3 for all natural n, and so 1/3 = .333... –Pomte 01:27, 31 March 2008 (UTC)
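The induction above describes exactly what a mechanical long division does, so it can also be spot-checked by running the division algorithm for as many digits as patience allows (an illustrative sketch; the helper name is invented here):

```python
# Simulate long division of 1 by 3: at every step the working dividend
# becomes 10, the quotient digit is 3, and the remainder is 1.
def long_division_digits(numerator, denominator, n_digits):
    digits, remainder = [], numerator % denominator
    for _ in range(n_digits):
        remainder *= 10
        digits.append(remainder // denominator)
        remainder %= denominator
    return digits

# Fifty digits, all 3s -- no surprise 4 ever pops up.
assert long_division_digits(1, 3, 50) == [3] * 50
```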
I think that's a proof that *if* 1/3 has a decimal representation, then it is 0.333..., you still need to prove that non-terminating representations do work (which they do because of the convergence properties of 3/10+3/100+..., but that's non-trivial). --Tango (talk) 14:10, 31 March 2008 (UTC)
Good point. Here's one way to show that every decimal representation (of the form described below) has a value, by interpreting them as sequences. To reject this, one would have to reject sequences, probably by saying something really unintuitive. This proof looks non-trivial, but I don't think it's hard to understand.
Let a_n be a digit from 0 to 9, for all natural n
Let s_n = a_1/10 + a_2/10^2 + ... + a_n/10^n, for all natural n
Then every s_n is a finite decimal representation, s_n = 0.a_1a_2...a_n. Consider the sequence (s_1, s_2, s_3, ...)
  1. Each s_n is a real number because it is a sum of real numbers, and the set of real numbers is closed under addition. So (s_n) is a real-valued sequence
  2. (s_n) is increasing because s_(n+1) - s_n = a_(n+1)/10^(n+1) >= 0
  3. (s_n) is bounded above by 1 because s_n = 0.a_1a_2...a_n <= 0.99...9 < 1
    (there's a finite number of 9s here)
(s_n) converges to a real number by the above three facts and the monotone convergence theorem. So every infinite decimal representation (with each decimal having a value from 0 to 9) is real-valued. –Pomte 14:35, 31 March 2008 (UTC)
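For the special case where every digit is 9, the increasing-and-bounded behaviour can be spot-checked with exact arithmetic (a sketch only; the convergence itself still rests on the monotone convergence theorem):

```python
from fractions import Fraction

# s(n) = 0.99...9 with n nines: the partial sums of 0.999...
def s(n):
    return sum(Fraction(9, 10**(k + 1)) for k in range(n))

seq = [s(n) for n in range(1, 10)]
assert all(a < b for a, b in zip(seq, seq[1:]))   # point 2: increasing
assert all(t < 1 for t in seq)                    # point 3: bounded above by 1
assert 1 - s(6) == Fraction(1, 10**6)             # the gap below 1 is exactly 10^-n
```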

Not another one...

.999... = 1
.999...8 = .999...
.999...7 = .999...8
etc.
.000...1 = .000...2
.000...1 = .999...
0 = 1

Before you say anything about my flawed representations (.999...8), let me clarify. I used those because I don't know the notation. What I mean by .999...8 is .999... with an 8 at the "end" instead of 9. I also realize there is no real "end" to .999..., but that's a terminology problem, and you know what I mean by it. 68.43.151.226 (talk) 22:13, 7 April 2008 (UTC)

Or, to rephrase: .999...=1 because there is no number between .999... and 1. Therefore, there is no difference between .999...8 and .999..., so .999...8=.999...=1. .999...8=1 because there is no number between them. I hope this settles the thing...
What's the "end" of 0.999...? The point is that there's one "9" for every natural number. There's a nine at the first position, at the second position, at the third position and so on. But for there to be an end, there'd have to be a last natural number. If your "0=1" proof is supposed to work, you'd even have to show not only that there's a last natural number, but that there are finitely many natural numbers. Huon (talk) 23:06, 7 April 2008 (UTC)
Let's not oversimplify. There is nothing wrong with 0.999...8 as a formal string, with a 9 at every position corresponding to a natural number (finite ordinal), and an 8 at the position corresponding to ordinal ω (the smallest infinite ordinal). This formal expression just doesn't happen to be a decimal representation of a real number. --Trovatore (talk) 23:25, 7 April 2008 (UTC)
It seems like a better notation for the string would be "0.999...;8". Not sure if I have a point to make, just saying. Melchoir (talk) 00:25, 8 April 2008 (UTC)
Just something to ponder: the limit, as n grows, of 0.99...98 (n nines followed by an 8) read as an ordinary finite decimal.
Cheers, silly rabbit (talk) 23:30, 7 April 2008 (UTC)
By the calculus of limits theorem: 0.99...98 with n nines equals 1 − 2/10^(n+1), and lim (1 − 2/10^(n+1)) = 1 − 0 = 1 as n → ∞.
So that's pretty clear. I'm not sure it's a significant fact, though. --Tango (talk) 23:53, 7 April 2008 (UTC)
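The finite strings behind this limit can be tabulated exactly (an illustrative sketch; reading "n nines then an 8" as the fraction (10^(n+1) − 2)/10^(n+1) is my assumption, and the helper name is invented here):

```python
from fractions import Fraction

# 0.99...98 with n nines then an 8, as an exact fraction.
def nines_then_eight(n):
    return Fraction(10**(n + 1) - 2, 10**(n + 1))

assert nines_then_eight(1) == Fraction(98, 100)   # 0.98
# The gap below 1 is 2/10^(n+1), which shrinks to nothing as n grows.
for n in (1, 3, 8):
    assert 1 - nines_then_eight(n) == Fraction(2, 10**(n + 1))
```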
I had intended it as a snippet to satisfy the skeptical. I'm not sure Trovatore's remark is helpful here, since it really does leave the realm of real numbers behind. One possible interpretation of the IP's question is whether this limit is also unity. I was pointing out that it is. silly rabbit (talk) 00:15, 8 April 2008 (UTC)
My point is that I seriously object to telling "lies to children". The string is not a decimal representation of a real number, according to the standard way of representing real numbers by decimals, and that's the correct thing to say about it, not that it doesn't make sense per se. --Trovatore (talk) 00:34, 8 April 2008 (UTC)
Fair enough. Upon reflection, I see how this is a valid interpretation of the IP's suggestion. silly rabbit (talk) 01:12, 8 April 2008 (UTC)

Comprehending Infinity

It's quite amusing to read the arguments going back and forth throughout these Talk pages. Before I even found the 0.999... page, there was no question in my mind that 0.999...=1 based upon, ironically, the same foundation to which most of the detractors cling: Common Sense.

I think the inability to accept the equality of 0.999...=1 is not so much a problem with numbers as taught in school, but rather in an erroneous perception of Infinity. Every "argument" against the equality on these Talk pages makes the assumption that there is some small distance – however small – which separates 1 and 0.999... on the number line. However, because the number of nines in 0.999... is infinite, this means that the space between 1 and 0.999... on the number line is also infinitely small.

So what people are really refusing to accept is that infinite is truly not finite. For those who think about it as a process, the argument that 0.999... is a series that never quite reaches one implies that they think the series stops somewhere because of a finite perception of the time required to add them to the series; but the series doesn't stop, ever. And this is the crux: if you indeed had an infinite amount of time to write nines, would you ever finally write a number which was exactly equal to 1? If one truly understands the concept of Infinity, the answer is, of course, yes. Why yes? Because there is no "finally" in infinite time. You would never stop writing, and by doing so you would write a number equal to 1.

Even some of you who agree with the equality may find yourself skeptical of this, but if you read it carefully, you'll find it is exactly the same equality, just translated into a physical action rather than an abstract concept. Your same difficulty in comprehending actually "reaching one" after infinite time is the same concept others struggle with in comprehending a decimal of infinite length. You would never reach one, because reaching one implies an end, or finite time. Given infinite time, you would write a number equal to 1.

For many with a sound background in mathematics, it is much easier to accept that a string of numbers goes on forever without end. Those same individuals may find it much more difficult to accept that an infinite amount of time is truly time going on forever without end, rather than a very-large-as-to-be-incomprehensible yet still finite amount of time. The point of this post (at last!) is that hopefully this thought experiment offers some insight into why so many people are vehemently opposed to 0.999...=1. Infinity existing entirely outside the realm of finite numbers is just hard for some to comprehend. GreyWyvern (talk) 15:21, 16 April 2008 (UTC)

Yes, I think that's why intuition fails to give the right result - infinity is not very intuitive. However, that's a reason why people don't understand it before it's explained to them. Once it's explained to them, if they still don't accept it, there is more going on - a reliance on intuition over rigour. If you're not willing to accept that your intuition could be wrong, you're never going to understand the numerous counter-intuitive parts of maths. --Tango (talk) 15:46, 16 April 2008 (UTC)
I think you're oversimplifying. You are essentially posing the question, "what would we be able to do, in our physical universe, if we had infinite time?", and suggesting an answer, "we would be able to reach 1". But the true answer is "we don't know.". We just don't know enough about our physical universe to answer this question. All of humanity's knowledge comes from collecting a finite amount of data over the course of a finite period of time in a finite volume of space. We just have no means to faithfully extrapolate what would happen over an infinite period of time. This observation is not a failure to comprehend infinity, but rather an acknowledgement of the fundamental limitations of humanity's ability to learn about nature. Thus your physical interpretation doesn't really hold much water. The only setting in which we can meaningfully reason about infinity is an abstract mathematical model, and we have no shortage of definitions, axioms and theorems to do so. -- Meni Rosenfeld (talk) 16:17, 16 April 2008 (UTC)
"what would we be able to do, in our physical universe, if we had infinite time?" The answer is not "we don't know"; rather it is: everything which is physically possible, despite how incomprehensible, including writing a number equal to one by writing nines after a decimal point. But now we're getting into philosophy. :) The key point to take away is that even to those who casually accept Infinity when it relates to a string of digits can still stumble over it when relating Infinity to other things such as time. I'm just giving a possible explanation as to why there seems to be such a logical disconnect between those who agree and those who disagree with the equality. GreyWyvern (talk) 16:31, 16 April 2008 (UTC)
I think you have missed the point of my comment. -- Meni Rosenfeld (talk) 18:31, 16 April 2008 (UTC)
What was it then? Do tell :) GreyWyvern (talk) 20:49, 16 April 2008 (UTC)
I've already explained it as well as I could, so we'll have to leave it at that. -- Meni Rosenfeld (talk) 21:41, 16 April 2008 (UTC)

The trouble with "common sense" is that it can sometimes err. There are number systems in which 0.999... (provided it can be properly defined) is not equal to one. These are, of course, not the real numbers. But real numbers are not necessarily the most common sense system available. In principle, one could perform an infinite number of tasks in a finite time (see supertasks). In such situations, it is perfectly acceptable to consider the "next" task afterwards. silly rabbit (talk) 01:13, 17 April 2008 (UTC)

"what would we be able to do, if we had infinite time?" I think the mathematically correct answer is anything, including things that are physically impossible. "If false then P" is true statement. So if we had infinite time to write 9's, we would eventually write not only number equal to one, but number equal to two also. Tlepp (talk) 05:19, 17 April 2008 (UTC)
If you assume that having infinite time is logically impossible, then yes, but should we assume that? --Tango (talk) 13:59, 17 April 2008 (UTC)

Basic Conclusion?

0.999... and all processes and arguments work if 0.999... is not 1, but they work equally well if it equals one. Goldkingbot (talk) 23:43, 22 April 2008 (UTC)

I'm really not sure I agree with the statement that stuff still works if 0.9~ is not 1. For example:
0.9~ != 1
(0.9~)/3 != 1/3
0.3~ != 1/3
And yet, by division,
1/3 = 0.3~
So, one consequence of 0.9~ not equaling 1 is that division is no longer a meaningful mathematical operation. Gustave the Steel (talk) 00:32, 23 April 2008 (UTC)
The other consideration of course is that the proof that 0.999... = 1 exists and will not go away. We can't just "decide" not to make it true any more, any more than we can "decide" that we don't want 1 + 1 to equal 2. It's not simply a matter of convention, it's a direct result from the axioms and definitions of the real numbers. Maelin (Talk | Contribs) 05:40, 23 April 2008 (UTC)
Technically, it's defined as such. Any "proofs" only prove the definition, if you will - which is the limit. Tparameter (talk) 13:45, 6 May 2008 (UTC)

1/3 = 0.3~ is insulting

Well I tried to stay out of this, but I guess I'm back. Concerning "Basic Conclusion?" above, of all the .999... arguments, 1/3 = 0.3~ is the most insulting. The central question is what does it mean for a decimal to repeat infinitely, so how can you expect to prove anything by simply trotting out another repeating decimal? The failure to see that this is the same question tells me that some people just don't understand logic and can't question grade school assumptions. (And before you call this flaming, how many people has the main article insulted with its insistence that they "Just don't understand infinity"?) 1/3 = 0.3~ looks right, but so does Pi=3.14. From there, they go on to bury the same circular assumptions in deeper terminology, (limits) and when someone tries to point this out, the answer is always "you don't understand, how dare you question a real mathematician!" (BTW, IS anyone here a "real mathematician"?) This is how con artists react when they are caught, so is it any wonder that there is so much skepticism in education? I'm betting some of the best minds are turning away from higher mathematics because they don't want to spend their lives arguing dogma about angels on the head of a pin. Algr (talk) 19:52, 7 May 2008 (UTC)
You'll lose that bet. -- Meni Rosenfeld (talk) 20:25, 7 May 2008 (UTC)
Actually, I imagine a lot of people do turn away from higher mathematics because of the sort of dogma required; mathematical proofs don't leave a lot of room for common sense, nor should they. On the other hand, the lack of wiggle room in a proper proof -- i.e., dogma -- is exactly what attracts a lot of people to mathematics as well. The idea that the provably incorrect utterance "pi=3.14" is at all similar to the demonstrable and provable "1/3 = 0.3~" does indeed show the misunderstanding that you're insulted to be reminded of. --jpgordon∇∆∇∆ 18:42, 8 May 2008 (UTC)
Yes, I'm sure you're right - lots of people aren't turned away from maths like that. However, I would describe those people simply as people that don't like (higher) maths. Of course people that don't like maths are going to be turned away from maths, that's not something to worry about. --Tango (talk) 19:12, 8 May 2008 (UTC)
Here's the problem: 1 divided by 3 yields infinite 3's after the decimal. Therefore, either 1/3 = 0.3~ or division produces erroneous results. Whether or not repeating decimals are equivalent to rational numbers isn't actually a question at all. We've set up our mathematical system based on a certain set of rules, and the equivalence of 1/3 and 0.3~ (and all resulting implications) is, provably, a consequence of those. Gustave the Steel (talk) 20:42, 7 May 2008 (UTC)
I guess part of the problem is this: When people thought that there were only four elements, you could do physical experiments to show that that model of the world couldn't explain things that the more complex atomic model could explain. But there is no real world experiment where the imaginary unit or the millionth digit of Pi, or an infinitesimal could produce a real world result. Therefore it is much harder to "clean house" of the articles of faith that slip into accepted dogma. But Paradigm Shifts happen - a house of cards can't stand forever. Algr (talk) 19:52, 7 May 2008 (UTC)
Nonsense. Refuting a mathematical mistake is much easier than a physical one - you only need to point out a mistake in a proof.
Algr, you have criticized the article plenty of times, and yet I still have no clue as to what it is exactly that you are complaining about. Do you think that:
  1. The symbol 0.999... should not be taken to mean "the real number represented by the decimal expansion with 0 everywhere to the left of the decimal point and 9 everywhere to the right"?
  2. Said real number is not equal to 1?
  3. Proofs of the equality in mathematical literature are lacking?
  4. Proofs of the equality in the article are lacking?
  5. Something else? (please elaborate)
You can choose more than one, but I'll prefer to stick with one issue so that we can clarify it. I'll give you a down payment for some of my possible replies:
  1. That's like saying that "+" shouldn't represent addition, that "2" shouldn't represent the integer following 1, or that "√" shouldn't represent a square root. Yes, in some specialized contexts, "+" can mean boolean OR or XOR, as well as other things. But in normal cases it just means addition. The same goes for 0.999... - it just means the real number.
  2. This is a proven mathematical theorem; see #3.
  3. Pick any respectable book in which a proof appears and show me an error.
  4. I agree completely. Wikipedia is an encyclopedia, not a mathematical textbook. It needn't contain proofs at all, and those that do appear needn't exhibit spectacular rigor.
-- Meni Rosenfeld (talk) 20:25, 7 May 2008 (UTC)
Regarding your latest question: Most of the PHDs were scared away by the silly arguments. I essentially get paid for conducting mathematical research, so you could call me a real mathematician. -- Meni Rosenfeld (talk) 20:28, 7 May 2008 (UTC)
The only assumptions involved in the 1/3 * 3 = 1 proof are that division always works in the way we learnt at primary school and that infinite decimals work in the same way as finite ones. Given both those assumptions, 0.999...=1 follows at once. To be strictly rigorous, you can't just assume those, you have to actually prove them from more basic axioms, but for an intuitive proof for someone not studying maths at university level, those assumptions are perfectly reasonable. --Tango (talk) 22:14, 7 May 2008 (UTC)

I'll do one more thing: I'll present here a different proof of the equality I've dug up from the archives. If there is any step you think is invalid, say so and we'll continue from there.

  1. 0.999... is not greater than 1. In other words, 0.999... <= 1.
  2. So 1 - 0.999... >= 0. Denote c = 1 - 0.999...
  3. Let n be any natural number.
  4. There exists a natural number m such that 10^m > n.
  5. 10^(-m) < 1 / n.
  6. 0.999... is greater than 1 - 10^(-m).
  7. Therefore, c < 10^(-m).
  8. So c < 1 / n.
  9. From 3-8 it follows that c is less than any number of the form 1/n, where n is a natural number.
  10. According to the Archimedean property of the real numbers, there is no positive real number with the property that it is less than any number of the form 1/n, where n is a natural number.
  11. Therefore, c is not positive.
  12. Since c >= 0, and is not positive, c = 0.
  13. 1 = 0.999...

QED. -- Meni Rosenfeld (talk) 11:29, 8 May 2008 (UTC)
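Steps 3-8 of the proof can be traced numerically (a sketch only; choosing m as the number of decimal digits of n is just one convenient way to satisfy step 4, and the helper name is invented here):

```python
from fractions import Fraction

# For any natural n: find m with 10^m > n, then c = 1 - 0.999... < 10^-m < 1/n.
def bound_on_c(n):
    m = len(str(n))             # step 4: m = digit count of n gives 10^m > n
    assert 10**m > n
    return Fraction(1, 10**m)   # step 7: c < 10^-m

# Step 8: whichever n you name, the bound on c drops below 1/n.
for n in (1, 7, 9999, 10**6):
    assert bound_on_c(n) < Fraction(1, n)
```

Since n was arbitrary, c is below every 1/n, which is exactly what the Archimedean property (step 10) then rules out for a positive real.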

I like. Tparameter (talk) 17:48, 8 May 2008 (UTC)
I like this proof too. It's better than digit manipulation proofs. I'd only like to add following comment. Real numbers are a standard or convention, something that is agreed upon among mathematicians. One part of this standard is archimedean property, which often takes the form of least upper bound property or Dedekind completeness. Thus archimedean property is not something one should try to prove a priori. Tlepp (talk) 21:23, 8 May 2008 (UTC)
Yes, and that's where we hit a problem. Mathematicians know how useful completeness is and don't want to define the real numbers in a way that doesn't include it - laymen don't. It's very difficult to convince a layman that we can't just define the real numbers in a different way that doesn't include the Archimedean property (well, we can, but we wouldn't want to). --Tango (talk) 21:48, 8 May 2008 (UTC)
Meni, that proof is idiotic. Did you even read the Archimedean property article before citing it? It says in the second line that the property assumes no nontrivial infinitesimals, so by using it you simply declare c to be zero when you were supposedly exploring the possibility that it wasn't. You are spinning around in the same circular assumptions yet again! It also happily states that there are plenty of fields that are non-Archimedean, and it describes exactly the relationship between infinitesimals and reals that I have been struggling to explain to you, and you seem unable to comprehend. Algr (talk) 22:32, 8 May 2008 (UTC)
The property doesn't assume the non-existence of non-zero infinitesimals, it *is* the non-existence of non-zero infinitesimals, and can be proven using completeness, there is no need to assume it. There are non-Archimedean fields, but not any complete ones, and completeness is a very desirable property. --Tango (talk) 22:40, 8 May 2008 (UTC)
<sarcasm> Your constructive and civil criticism is welcome. Simply stating that you disagree with step 10 would never have gotten your message across.</sarcasm>
This is exactly why I asked what you object to. 0.999... denotes a real number, so of course I will use the Archimedean property which is part and parcel of the real numbers. If you disagree with 0.999... being a real number, of course the proof is irrelevant. -- Meni Rosenfeld (talk) 22:57, 8 May 2008 (UTC)

Here is my objection:

No one would ever go to the bar and ask for .999... of a beer. No one EVER says .999... when they are describing ONE of anything. If someone stumbles across the symbols .999... and starts to wonder what it means, they inevitably will start thinking about infinitesimals. (Although they likely won't know the name, and hence will probably ask some confusing questions.) The logical thing to do with such a person is tell them about Hyperreal numbers and infinitesimals. It is obnoxious to instead lock the discussion into a number set that is defined as being incapable of supporting the property they have discovered. It is like insisting that 1/2 = 0. That statement is true - if you refuse to look outside of natural numbers. You can similarly force .999...=1 by refusing to think outside the real set. But when you do these things, all you are doing is blinding yourself to the intent of the question. In my board game, I used hyperreal numbers, and everyone understood what I meant without my having to explain anything. Blue = 2 to 2.999...; Red = 3 to 3.999... etc. No one was confused by those rules, and if someone had tried to pull out the Archimedean property to claim that 2.999... might actually be Red, well no one would want to play with that person. Real numbers don't matter. Real people doing real things is what matters. Listen to them. Algr (talk) 22:32, 8 May 2008 (UTC)

No, in your game you just used poor terminology. You should have just said "Blue is greater than or equal to 2 and (strictly) less than 3". It's obvious what you meant, that's why no-one had a problem with it, but that doesn't make it any less wrong, it just means it doesn't really matter. You can use hyperreals for things if you like, but people rarely want to - if you're doing a calculation about something in the real world and the answer comes out to be "0.999..." then it means "1". The real world works in real numbers (or, to be more precise, is accurately modelled by real numbers). From a mathematician's point of view, using something like hyperreals rather than reals means giving up certain useful properties (like completeness, or the existence of multiplicative inverses, or various other things depending on what alternative to the real numbers you choose). You can give up those properties if you like, but it seriously limits what you can do. --Tango (talk) 22:48, 8 May 2008 (UTC)
The game had 12 classes; your suggestion would have turned a simple table into a whole page of impenetrable text. What is the point of being "right" if it makes the game almost unplayable? People "rarely want" chemotherapy, either, but there isn't much sense in pretending IT doesn't exist. If you think the real world works on real numbers then you have had your head in the mathematical clouds too long. In the real world, Pi and .999... are meaningless after about 20 decimal places, no two objects can ever truly be equal because of vacuum fluctuation, and no infinite process can ever conclude anyway. Algr (talk) 23:23, 8 May 2008 (UTC)
Right, no one would ever understand "2 ≤ x < 3". -- Meni Rosenfeld (talk) 23:28, 8 May 2008 (UTC)
Great point. Moreover, I don't think hyperreals are quite in the same realistic class as chemotherapy. Tparameter (talk) 17:09, 9 May 2008 (UTC)
Once again you toss fancy technical terms without having the slightest idea what they mean. If you think this has anything to do with hyperreal numbers, you are sorely mistaken. I'll say this slowly so you can understand: Decimal expansions in general, and 0.999... in particular, have nothing to do with hyperreal numbers. Decimal expansions only make sense in the real numbers, so of course 0.999... should be taken to mean a real number.
If you think that by saying "2 to 2.999..." you have found some magical shortcut to saying "at least 2 and less than 3", you are once more sorely mistaken. Even if we did take some hyperreal number and denote it by 2.999..., there is still another hyperreal between it and 3. You don't have a greatest hyperreal smaller than 3, just like you don't have a greatest real smaller than 3.
0.999... is a mathematical entity, and should be investigated in mathematical terms. A person asking for beer at the bar, or a bunch of idiots giving it some whimsical interpretation for the purposes of a board game, don't affect what it is. If anyone uses the symbol 0.999... to mean the greatest number smaller than 1, he is wrong. Wrong. Wrong.
Wrong wrong wrong? Are you sure you don't mean .WrongWrongWrong... ? Algr (talk) 23:23, 8 May 2008 (UTC)
I've said it before, and I'll say it again. You apparently have very little mathematical background; You use terms you know nothing about but think they somehow defend your position; You think that mathematical definitions and theorems should conform to the "word on the street" amongst laymen who share your ignorance; And you come to an uncontroversial mathematical article, denounce the information it contains, and insult people who have a greater understanding than you (and would be happy to help if only you had made any sensible claim). Please, do us all a favor. Go learn some math, or go home. -- Meni Rosenfeld (talk) 22:57, 8 May 2008 (UTC)
Math is just another game like my board game. It has its rules, and its uses, but also a ton of hubris. Did you seriously call this article "uncontroversial"? That is just delusional; the article itself admits that it is controversial. Meni, you are just going to have to accept that I understand hyperreals better than you do. Algr (talk) 23:23, 8 May 2008 (UTC)
Thanks, I needed a good laugh. So tell me, mister understander, what exactly are the hyperreal numbers? -- Meni Rosenfeld (talk) 23:28, 8 May 2008 (UTC)
And which hyperreal number is 0.999...? -- Huon (talk) 23:45, 8 May 2008 (UTC)
On a more serious note: I hope this was a joke. You and I both know that you have no idea what hyperreal numbers are. You just heard somewhere that they have nonzero infinitesimals and you think this somehow saves your lost cause. Not only that, you lack the mathematical background to understand an introductory article like hyperreal number. Please, the only thing you can do now that will indicate any sort of maturity or intelligence on your part is to go away. Coming back with more force just because you are annoyed with what I've just said will make you childish, pompous and foolish. -- Meni Rosenfeld (talk) 12:02, 9 May 2008 (UTC)

After my last edit I thought about how fast tempers started flaring here, so I've decided to ::have some tea:: I'll read what you wrote in a day or so. Why don't you guys spend some time off as well and catch some air? Algr (talk)

Yes, math is just another game, and like any other game it has its rules. This article is "uncontroversial" among people who know (the) rules. Unfortunately the game of math is played by almost every person on the planet, and nobody knows all the rules. In the beginning of the game we are told a minimal amount of rules, and as the game goes further new rules are established as required. Professional mathematicians are actively seeking new rules. This article describes one crucial point of the game. The length of the talk pages should convince anybody that a new rule is required. Since this part of the game was first played centuries ago, there is common agreement on what rule to make. And this is where the article fails: it doesn't describe the nature of the game, it doesn't say we are making a new rule, it doesn't respect the reader's intelligence. Instead some people insist on dogmatic "teach by the hammer" pedagogy. I know an encyclopedia is not a textbook, but still, can we change the article to describe this nature of the game? Tlepp (talk) 09:38, 9 May 2008 (UTC)
I don't think the article should dwell on why we use real numbers instead of the hyperreals, surreal numbers or any of the other number systems. That's beyond the scope of an article on 0.999... We already have a short paragraph on other number systems and infinitesimals, with the surreal numbers probably the most relevant; maybe we could expand it a little. But once we accept the real numbers, 0.999...=1 is not a "new rule", but a consequence. Huon (talk) 10:34, 9 May 2008 (UTC)
I'd say that the mathematical analogue of a game's rules is its axioms. One could base the entirety of mathematics on ZFC - starting with this relatively short list of axioms and axiom schemas, and continuing by introducing new definitions and proving theorems. This is not "seeking new rules", it is exploring the consequences of pre-established rules and naming them. In chess, things like discovered attacks, pins, forks, skewers, sacrifices, zugzwangs, zwischenzugs, tempo, control of the center, king safety (and the list goes on) aren't new rules only available at an advanced level - they are just names given to certain situations that are interesting in light of the rules. A person who has mastered the ZFC axioms doesn't need a "new rule" to establish the value of 0.999... - he just needs to know what entity 0.999... is a name for. -- Meni Rosenfeld (talk) 11:45, 9 May 2008 (UTC)
<sarcasm>You are absolutely right, a person who has mastered the ZFC axioms just needs to know that 0.999... is a name for the entity 1, and then it is easy to prove that 0.999... equals one.</sarcasm> A person needs the Archimedean property, and from a layman's point of view it doesn't make any difference whether you call it a rule, axiom, definition, consequence of completeness, or zugentaswangenswugen. In any case it's a convention agreed upon by mathematicians. Tlepp (talk) 12:46, 9 May 2008 (UTC)
No, 0.999... is a name for the real number represented by the decimal expansion with 0 everywhere to the left of the decimal point and 9 everywhere to the right. And it's not a name bestowed uniquely to 0.999... - it is given collectively to all repeating decimal expansions.
Saying that "0.999... = 1" is a convention is just like saying that "1 + 1 = 2" is a convention. Sure, the choice of symbols for representing "1", "+", "2" and "=" is a convention. But the result itself is a necessary consequence of those notational conventions. So why won't Algr and his peers redirect their energies to the article 2 (number)?
You have completely missed the point of my chess analogy. There is an essential difference between rules which are "arbitrarily" chosen, and results which are universal and will be found by anyone investigating the rules, and will only be named differently. -- Meni Rosenfeld (talk) 13:14, 9 May 2008 (UTC)

My calculator disagrees!

Courtesy of one of my high school math teachers:

Take a calculator. Press "1" "/" "3" and "=". Then press "x" "3". With many calculators you will not get 1.

Not sure if this is worth mentioning in the article, since it's more a lesson on knowing the limitations of your methods, but... —Preceding unsigned comment added by Somedumbyankee (talkcontribs) 23:50, 10 May 2008 (UTC)
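The calculator behavior described above comes from working with a fixed number of significant digits. It can be reproduced without a physical calculator using Python's standard-library `decimal` module, which, like most calculators, rounds 1/3 to finitely many digits before multiplying back (this is just a sketch of the rounding effect, not a statement about 0.999... itself):

```python
from decimal import Decimal, getcontext

# Like a pocket calculator, the decimal module keeps a fixed number
# of significant digits (28 by default).
getcontext().prec = 28

third = Decimal(1) / Decimal(3)   # 0.3333... (28 threes): the true 1/3 is cut off
result = third * 3                # 0.9999... (28 nines): the lost tail never comes back

print(third)
print(result)
print(result == 1)                # False: the truncated 1/3 lost information
```

Note that IEEE binary floats behave differently here: in Python, `(1/3) * 3` happens to round back to exactly `1.0`, which is why the effect is easier to demonstrate in fixed-precision decimal arithmetic.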

Might be worth adding "(due to rounding errors, calculators may not get this calculation correct)" to the appropriate place. It's not a serious issue, but it wouldn't hurt to address it. --Tango (talk) 00:06, 11 May 2008 (UTC)
Amusing how this got kicked off the "serious talk page" very quickly and I got a nastygram for doing original research with a calculator. No, it's not particularly important for "pure math" applications, but it is relevant for engineering and other applications where computers are used. I'm not going to push it, it appears that there are some sharks in the water on this particular page.Somedumbyankee (talk) 01:04, 11 May 2008 (UTC)
I think someone thought you were trying to make a serious objection to the fact that 0.999...=1. There are so many people that do (and often for reasons just as nonsensical as yours) that it becomes difficult to spot when someone is being deliberately ironic. --Tango (talk) 01:09, 11 May 2008 (UTC)
Sarcasm doesn't mix with math without other elements anyway.Somedumbyankee (talk) 02:14, 11 May 2008 (UTC)
I'm afraid I have to step forward as the culprit of this lapse of humour. I saw the exclamation mark and immediately took it to be an argument against the proof. The problem with this subject is that it's so confusing that it's hard enough to work out what the importance of the result of an argument is, let alone an encyclopedic one. I'm not sure how I might be able to inject some romance into the discussion to make things right ;)
Anyhow, the result of using a calculator is pretty inconclusive. If doing the calculation above doesn't give you 1, it gives you 0.999…, which, as we know, is equal to 1 in an alternative decimal notation. I think you're right. If academics haven't taken the time to discuss this approach then we probably don't need to either. BigBlueFish (talk) 14:37, 11 May 2008 (UTC)

Let's do some math

1/3 = 0.3 + 1/30 = 0.33 + 1/300 = 0.333 + 1/3000 = 0.3333 + 1/30000 = ...

If we define two sequences Q_n = 0.33...3 (the decimal truncated after n threes) and R_n = 1/(3·10^n),

then Q_n + R_n = 1/3 for all non-negative integers n. Then we must answer the following questions.

  1. Do these sequences converge?
  2. If the sequences converge, is the limit unique? In this case we can write Q = lim Q_n and R = lim R_n.
  3. Is the sum Q + R equal to 1/3?
  4. Is the number R equal to zero?

If the number system we use has a 'nice' topology, we can expect the following answers: 1. If one of the sequences converges, so does the other. 2. If one of the limits is unique, so is the other. 3. The limit 'respects' the addition operator, so 1/3 = Q + R. 4. R is a non-negative number smaller than any positive rational number; in other words, R is a non-negative infinitesimal. For different number systems, we get different answers.

  • In the rational numbers, it is easy to see that the sequences converge, with Q = 1/3 and R = 0.
  • Real numbers have the property/axiom of Dedekind completeness. From this property it follows that all Cauchy sequences converge to a unique real number, and there are no infinitesimals other than zero. So in the real numbers Q = 0.333... and R = 0.
  • In the hyperreals positive infinitesimals exist, but the answer depends on the topology.
    • Hyperreals with 'order topology'. The sequences do not converge. Decimal notation 0.333... doesn't mean anything.
    • Hyperreals with 'coarse topology'. The sequences converge, but limits are not unique. Limits are unique up to infinitesimal. In this case '0.333...' doesn't represent a hyperreal, but a subset of all hyperreals that are infinitesimally close to 1/3.
    • If the hyperreals are constructed as the ultrapower R^N/U (sequences of reals modulo a free ultrafilter U), then we can replace the topological limit operation with the algebraic 'modulo ultrafilter' operation, taking Q = (Q_0, Q_1, Q_2, ...)/U and R = (R_0, R_1, R_2, ...)/U.
    In this case 1/3 = Q + R, Q is a hyperreal number slightly smaller than 1/3, and R is a positive infinitesimal.

Tlepp (talk) 10:33, 11 May 2008 (UTC)
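The finite identities behind Tlepp's setup, 1/3 = 0.3 + 1/30 = 0.33 + 1/300 = ..., can be checked exactly for any n with rational arithmetic. A minimal sketch using Python's standard `fractions` module (the names Q and R mirror the sequences above and are ad hoc, not standard notation):

```python
from fractions import Fraction

def Q(n):
    """Partial decimal expansion 0.33...3 with n threes, as an exact fraction."""
    return sum(Fraction(3, 10**k) for k in range(1, n + 1))

def R(n):
    """The exact remainder 1/(3 * 10^n)."""
    return Fraction(1, 3 * 10**n)

# The identity 1/3 = Q_n + R_n holds exactly for every n,
# while the remainder R_n shrinks toward 0.
for n in range(6):
    assert Q(n) + R(n) == Fraction(1, 3)

print(Q(4), "+", R(4), "=", Q(4) + R(4))   # 3333/10000 + 1/30000 = 1/3
```

Whether the remainders "reach" zero in the limit is exactly the question the bullet points above answer differently in different number systems.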

I'm no expert on hyperreals, but unless I'm mistaken, in the last case it depends on our choice of ultrafilter what hyperreal numbers we get for Q and R. And since there's no canonical choice of an ultrafilter, there's also no canonical choice of which of several numbers infinitesimally close to 1/3 to call "0.333...". Huon (talk) 12:56, 11 May 2008 (UTC)
It's also somewhat arbitrary to use 0.333... to refer to (0, 0.3, 0.33, ...) / U. We could just as well decide that it refers to (0.3, 0.33, 0.333,...)/U or (0.33,0.3333,0.333333,...)/U. This clearly demonstrates why decimal expansions aren't supposed to be used for hyperreal numbers. You can make it work, but it certainly isn't pretty. -- Meni Rosenfeld (talk) 13:10, 11 May 2008 (UTC)
In order to hopefully get a jumpstart on the inevitable hyperreal fans latching onto this as the solution to all their problems, could you please elaborate on the ways in which decimal expansions for hyperreals cause problems? For those without any knowledge of the hyperreals, saying "you can make it work but it isn't pretty" is too vague to serve as a deterrent to the 'aha, we should be using the hyperreals!' argument. What sort of problems with the structure arise with application of decimals to the hyperreals? Maelin (Talk | Contribs) 15:15, 11 May 2008 (UTC)
It's not as much a "problem" with mixing the two as it is the mere observation that decimal expansions were designed with real numbers in mind. Every decimal expansion represents a unique real number, and there is no doubt as to which one; every real number is represented by a decimal expansion or two. Thus decimal expansions are a useful encoding for real numbers. We don't have all this for hyperreals - any choice of a method to interpret decimal expansions as hyperreal numbers is arbitrary, and will always leave the vast majority of hyperreals with no decimal representation. Thus decimal expansions are useless as a tool to describe and investigate hyperreal numbers. -- Meni Rosenfeld (talk) 15:25, 11 May 2008 (UTC)
Here's one "problem" with decimal expansion of hyperreals
2 * 0.555... = 2 * (0, 0.5, 0.55, 0.555, ...)/U = (0, 1.0, 1.1, 1.11, ...)/U
= (1, 1.1, 1.11, 1.111, ...)/U - (0, 0.1, 0.01, 0.001, ...)/U = 1.111... + infinitesimal
So 2*0.555... is not exactly 1.111..., it is only an approximation up to infinitesimal. Another example:
1 = 0.999... + (1 - 0.999...) = 0.999... + (0, 0.1, 0.01, ...)/U = 0.999... + infinitesimal
With standard real numbers you can at least theoretically do exact computations with decimal expansions. In practice you can only compute a finite number of digits in a finite amount of time. Tlepp (talk) 16:36, 11 May 2008 (UTC)

At last, someone who can talk sense! Why can't you just put what Tlepp said in the introduction rather than acting like hyperreals are forbidden knowledge corrupting our youth? Algr (talk) 17:56, 11 May 2008 (UTC)

Algr, you have no idea what hyperreal numbers are, and you have no idea what Tlepp said. Stop acting as if you do. -- Meni Rosenfeld (talk) 18:15, 11 May 2008 (UTC)
A more explicit answer: Because 0.999... is not a hyperreal number. If you disagree, just tell us which hyperreal number you think it is. Furthermore, if we use decimal representations for hyperreals the way Tlepp does, we will lose the ability to represent almost all reals (such as, say, 1/3 or e), making decimal representations pretty worthless (or when was the last time that you wanted a decimal representation for "a third minus an infinitesimal"?). Also check Tlepp's first example. "Common sense" would tell us that, since 1.111... has one "more" 1 than 0.555... has 5's, we have 2*0.555... < 1.111... - but with the hyperreals, the opposite is true. Huon (talk) 18:20, 11 May 2008 (UTC)
The article already mentions that things work differently in other number systems. That's all that is required for this article - a more detailed discussion about hyperreals can go in Hyperreal numbers. --Tango (talk) 18:40, 11 May 2008 (UTC)
Setting aside the "sequence modulo ultrafilter" operation for a minute, there are ways to do "native" decimal expansions within the hyperreals. See the paragraph 0.999...#Infinitesimals beginning "Non-standard analysis is well-known..." for the extended decimal expansion of 1/3. Melchoir (talk) 19:16, 11 May 2008 (UTC)
Huon, I think Tlepp's equation has a typo, since I don't see why that minus becomes a plus; let's wait for him to post again. But he just told you what hyperreals .999... could be. You have to define your number set first, and _then_ start interpreting digits and punctuation. If you do it the other way, you trap yourself in unstated assumptions. Algr (talk)
You're right regarding the sign, of course. My mistake, sorry. But neither Tlepp nor you have shown what hyperreal 0.999... is. Tlepp has shown how each ultrafilter U gives another candidate - if you and I choose different ultrafilters Ua and Uh, we get different numbers (0, 0.9, 0.99, ...)/Ua and (0, 0.9, 0.99, ...)/Uh. Both can equally reasonably be called "0.999...". They might be equal, but I have no idea how to prove or disprove that. What I can show is that there are sequences where the choice of ultrafilter does change the hyperreal number - suggesting that our two candidates for 0.999... needn't be equal, either.
Forget ultrafilters. (0, 0.9, 0.99, ...) and (0, 0.99, 0.9999, 0.999999...) represent different Hyperreals using the same ultrafilter. —Preceding unsigned comment added by 69.91.95.139 (talk) 14:40, 17 May 2008 (UTC)
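The point about representatives can be made concrete without choosing any particular ultrafilter. Take the two natural candidate sequences for "0.999..." from Meni's earlier 0.333... example, one shifted by a single digit: they disagree at every single index, and two sequences are identified modulo U only if they agree on a set belonging to U, so no ultrafilter can ever make them equal. A sketch with exact rationals (the function name is ad hoc, not standard notation):

```python
from fractions import Fraction

def nines(k):
    """0.99...9 with k nines, as an exact fraction: 1 - 10^(-k)."""
    return 1 - Fraction(1, 10**k)

# Two candidate representative sequences for "0.999...":
a = [nines(n) for n in range(8)]       # (0, 0.9, 0.99, 0.999, ...)
b = [nines(n + 1) for n in range(8)]   # (0.9, 0.99, 0.999, ...)

# They disagree at EVERY index, by 9/10^(n+1) > 0 ...
diffs = [b[n] - a[n] for n in range(8)]
assert all(d == Fraction(9, 10**(n + 1)) for n, d in enumerate(diffs))

# ... so under any ultrafilter they denote DIFFERENT hyperreals,
# each sitting infinitesimally below 1 by a different amount.
print([str(d) for d in diffs[:3]])   # ['9/10', '9/100', '9/1000']
```

This is one concrete way to see why "0.999..." has no canonical hyperreal value: equally reasonable representatives yield distinct numbers.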

Progress?

I agree that you should first define the number set and then interpret notation. But the "usual" number set (as in, the one sufficient for every real-life application, practically all of physics and most of mathematics) is the set of real numbers, not the set of the hyperreals. That's why our article concentrates on the interpretation of 0.999... as a real number. Huon (talk) 23:03, 11 May 2008 (UTC)

I think we are making progress here. Yes, .999... could mean several different hyperreals (or be meaningless) depending on what number set you are in, and I understand that hyperreals negate all sorts of useful mathematical tools and are not useful for most applications. The article should simply say this. But in terms of number sets, I question if "usual" means "appropriate". If someone says ".999..." out of the blue, you need to establish what number set they are thinking of. If they then say "A number that is almost, but not quite one.", well, that is an infinitesimal, so you need a number set that supports them. That rules out the Real set. You can still talk about .999... in the Reals, but you have to acknowledge at the start that the idea of "A number that is almost, but not quite one." does exist in mathematics. So what I am saying is that the article should bring up hyperreals in the introduction, and give them equal weight to the Reals throughout. Algr (talk) 04:55, 12 May 2008 (UTC)

Let's be crystal clear: you're wrong about the hyperreals, and your proposal has an infinitesimal chance of adoption.
What is possible is to acknowledge that objects just less than 1 can be found in some number systems, AND that these are not unique and bear no relation to 0.999…. I'll give it a try. Melchoir (talk) 05:39, 12 May 2008 (UTC)
RE: 'If someone says ".999..." out of the blue, you need to establish what number set they are thinking of.' Furthermore, whenever someone says, "may I have 7 apples?", I immediately wonder, and ultimately ask, "to what base number system are you referring, fine sir?" Otherwise, how could I possibly deliver the correct quantity of apples???!!! Tparameter (talk) 07:41, 12 May 2008 (UTC)
To my knowledge, there are no sources at all denoting a hyperreal number less than 1 by "0.999..." (simply because there is no single hyperreal number less than 1 to denote so). Unless a source can be found, the article shouldn't talk about it - and even if there is, giving the hyperreals equal weight to the reals would violate WP:Undue weight. I like Melchoir's rewrite. Huon (talk) 09:10, 12 May 2008 (UTC)
I didn't mean to extend decimal notation to hyperreals. I built a mathematical model that can simulate a student's misunderstanding. It makes an educated prediction about what some students might be thinking when they don't know how to correctly interpret decimal expansion notation. It shows that the difference between the two interpretations is no more than an infinitesimal amount. In fancier language, the difference between "right" and "wrong" is not real, it is very very small. We are splitting hairs. No one needs more than a trillion significant digits anyway. Btw, may I have 10 cookies? Seven is not enough. Tlepp (talk) 16:16, 12 May 2008 (UTC)
Tlepp, I give you 10 cookies (mod 10).
Numerically, the misunderstanding may be minute, but it betrays a fundamental misunderstanding of the real numbers, which might lead to even larger errors. If people think that merely constructing a sentence guarantees something to exist, such as "the biggest decimal number smaller than 10", then that's a pretty big problem. It has led to fallacious arguments for the existence of God, and to fairly major developments in set theory, among other things. Saying that students are only slightly off is a bad move when the cause may be much larger. Endomorphic (talk) 11:33, 15 May 2008 (UTC)
Yes, I agree. It's a pretty big problem if people think that merely constructing a sentence guarantees that something such as "the smallest number equal to or larger than any number in set X" exists. It might lead to fallacious arguments about "0.999...". God save us from such ignorance. Tlepp (talk) 12:12, 15 May 2008 (UTC)

And more...

Actually 0.999... does not equal 1.

Proof 1: 1 is one (1) per definition, and 0.999... does not look like 1.
Proof 2: 1 x 1'000'000 = 1'000'000 and 0.999... x 1'000'000 = 999'999.999..., still not looking like 1'000'000.
Proof 3: There is actually a number slightly less than 1 and bigger than zero (0); one number is 0.5. Another is 0.9 and 0.1. DavidM89 (talk) 23:54, 20 May 2008 (UTC)

Well done. Have a gold star. Oli Filth(talk) 23:56, 20 May 2008 (UTC)
Thanks! DavidM89 (talk) 23:59, 20 May 2008 (UTC)
Response to proof 1: 1 can be written many ways: 1/1, 2/2, cos 0, ln e, i^4, 2 - 1, 1e0, 1^2, and so forth. Another way of writing it is 0.999...; contrary to the intuition of many people, decimal notation is not a bijection from decimal representations to real numbers.
Response to proof 2: Same thing, only a million times bigger.
Response to proof 3: 0.0~01 is nothing close to 0.1 or 0.9.--Sunny910910 (talk|Contributions|Guest) 00:01, 21 May 2008 (UTC)
Proof 1: again... 1 does not look like 0.999... .

Even if you make the 9s infinite it will never be actually 1, just a little smaller than 1. That's like saying a deer is equal to a goat just because they are very much alike. Isn't it?

Proof 2: Doesn't matter how big it is... see Proof 1.
Proof 3: Would like to explain that? Read again and refrain... DavidM89 (talk) 00:10, 21 May 2008 (UTC)
Before wasting any more of your (or anyone else's) time, please read this page and its archives. Oli Filth(talk) 00:16, 21 May 2008 (UTC)
"Even if you make the 9s infinite it will never be actually 1, just a little smaller than 1." If you say this, you have to say how much smaller. If .999... < 1, then what is the value of 1 - .999... ?—Chowbok 00:26, 21 May 2008 (UTC)

Hi David, welcome to Wikipedia! While I agree with you that .999... shouldn't be assumed to mean 1, I'm afraid that those arguments are too flawed and simplistic. I suggest you read up on infinitesimals, real numbers, and hyperreals, as well as the other discussions that we have been having here. To the others here, I would like to point out that David is already describing an infinitesimal, just like anyone who first encounters .999... So is it really rational to assume that he is thinking of the Real Set? Algr (talk) 02:08, 23 May 2008 (UTC)

The real set is what most people use when they talk about numbers, so if he intended to discuss 0.9~ in another number system, he should have specified which one he was using. There are plenty of people who assume 0.9~ isn't equal to 1 in the reals, and he gave us no reason to assume he isn't one of them. (Besides, he didn't actually describe an infinitesimal yet.) Gustave the Steel (talk) 04:42, 23 May 2008 (UTC)
I believe someone just mentioned "folks who don't want to understand." What else do you think "it will never be actually 1, just a little smaller than 1." could possibly mean? You are just using the Real Set to blind yourself to an obvious question so that you don't have to answer it. Expecting David to define one precise hyperreal before you acknowledge that that is what he is describing is absurd - it is like creationists who insist that evolutionists must find the fossil of the exact animal that evolved in order to prove evolution. Algr (talk) 06:32, 23 May 2008 (UTC)
I don't expect David to "define one precise hyperreal" - actually I assume he never heard of the hyperreals before you mentioned them. By the way, his "proof 2" can easily be turned into a proof of the equality, since he asserts that 1,000,000*0.999... = 999,999+0.999... But since you keep bringing up the hyperreals, I expect you to tell me precisely: Do you believe 0.999... is a hyperreal, and if so, which one? Huon (talk) 10:18, 23 May 2008 (UTC)
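Huon's remark that "proof 2" can be turned into a proof of the equality is worth spelling out: if 1,000,000 × 0.999... = 999,999.999..., and the fractional part on the right is again 0.999..., then x = 0.999... satisfies 1,000,000·x = 999,999 + x, a linear equation with exactly one solution. A quick exact-arithmetic sketch (the split of 999,999.999... into 999,999 + 0.999... follows David's own digit pattern):

```python
from fractions import Fraction

# David asserts: 1,000,000 * x = 999,999.999...  where x = 0.999...
# The right-hand side splits as 999,999 + x, so (1,000,000 - 1) * x = 999,999.
x = Fraction(999_999, 1_000_000 - 1)

print(x)   # 1 -- David's own manipulation forces 0.999... = 1
```

The same rearrangement works with any multiplier; 10 instead of 1,000,000 gives the familiar "10c - c" argument.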
It doesn't matter if .999... can actually be connected to a specific hyperreal or not. As long as the article is unable to recognize that the Real Set is a choice and examine the consequences of other choices, it will always read as willful blindness to real mathematical concepts. Why would anyone trust it? Algr (talk) 14:03, 23 May 2008 (UTC)
It does recognise that. It specifically says that other number systems can be constructed which give different results. It doesn't specifically discuss any of them because none of them are actually relevant. When people use decimal expansions they are invariably referring to real numbers. --Tango (talk) 14:18, 23 May 2008 (UTC)
And why should the article discuss hyperreals in more detail than it already does if 0.999... cannot be actually connected to a specific hyperreal? Why claim that people think of hyperreals when they speak about 0.999...? Are there any reliable sources discussing 0.999... in the context of hyperreals (or conversely hyperreals in the context of 0.999...)? Huon (talk) 16:13, 23 May 2008 (UTC)
Okay, whoa there, I think we moved out of David's comfort zone the first time someone mentioned Infintesimawhatsits or Hyperspacereals. Let's scale it back a bit. Two different descriptions can actually be talking about the same thing, for instance, y = 4*2 - 5 and y = <The Number of Corners on a Triangle>. This happens all the time in math; just because two things look different doesn't mean they are. 0.999... and 1 are two things that look different but are really the same. Sure, it may be weird and strange and counterintuitive. There was a time when some of the world's most respected scientists insisted that aeroplanes were impossible because they'd be heavier than the air they were trying to float through. Endomorphic (talk) 13:19, 28 May 2008 (UTC)
Those scientists had clearly never seen a bird. -- Meni Rosenfeld (talk) 13:22, 28 May 2008 (UTC)
...or at least never bothered to weigh one. siℓℓy rabbit (talk) 13:26, 28 May 2008 (UTC)

Observation

Just my observation. I wonder if some of the arguers against are in reality trolls who truly believe otherwise. It seems to me as I read the discussions/arguments against over many months that they don't really change, but only the wording changes a bit. So, if drawing in new people to argue the merits of the article only means that new folks are arguing the same old turf, what's the point except to argue.

I've been tempted many times to present the argument from my perspective but in each case I've been able to find that my new argument was only a rehash of a prior response. So, this page seems more and more to resemble Groundhog Day, not being fueled by folks who don't understand but by folks who don't want to understand.

But after all, I suppose that this is a wonderful place to have one vent their irrationalities in a concentrated place rather than scattered about in many and several articles ;-) --hydnjo talk 01:21, 23 May 2008 (UTC)

Well, there's one particular person who had been trolling this article, under different IPs, for two years or so (he's not active anymore) - and by now I have no doubt whatsoever that he is fully aware of the equality being correct, and just enjoys irritating people by making silly arguments to the contrary (and insulting people while he's at it). There were probably other false skeptics, some true skeptics going the trollish way to express their skepticism, and some people with legitimate questions. Personally, I try not to waste too much time on a person if I see they don't want to understand. -- Meni Rosenfeld (talk) 11:13, 23 May 2008 (UTC)
I doubt there are many (I won't say any) people that agree with the equality and just say they don't to be awkward, but I think quite a few of the people that disagree don't actually want to learn any better. It's a common feature with any topic - when people argue about something they generally just want confirmation that they're right (and will interpret pretty much anything as that confirmation), they're rarely open minded enough to accept that they could be wrong. --Tango (talk) 12:51, 23 May 2008 (UTC)
Sometimes people get distracted talking about the hyperreals, infinitesimals, ZFC and suchlike, when none of it is really relevant, and accidentally(?) scare away people who might have had genuine interest or concern. (I was about to add the Archimedean property, except that *is* relevant.) I feel this page could be better off if people did less writing and more reading. Endomorphic (talk) 13:28, 28 May 2008 (UTC)
Infinitesimals are also extremely relevant. Hyperreals are normally only brought up by people trying to use clever sounding words to make their nonsense more convincing. Once a discussion reaches ZFC, it's probably a lost cause - Godwin's Law of Mathematics, perhaps? --Tango (talk) 13:58, 28 May 2008 (UTC)
The only place infinitesimals need to be mentioned in this debate is to say "These aren't the infinitesimals you're looking for." I've never seen any good come from infinitesimals in all the time I've watched this page. The least damage they can do is destroy every general sequence convergence result you'd like to be able to use (nuts to your decimals converging), probably taking out the algebra at the same time (you like to be able to divide, yes? not any more). The more common result is that people get the impression that they have some ground to argue from, when they don't. Responsibly used, "infinitesimals" are usually just *small but finite* quantities waiting to be hit with a limit to zero. The actual existence of infinitesimals is a pet hate of mine, because it causes so much confusion, breaks so much stuff, and doesn't add anything useful. In any case, the 10c-c proof *doesn't use* anything other than what you need to define decimals in the first place, so 0.999... = 1 again. This is the first time I've noticed how ironic the "infinitesimals" position is: if you invoke infinitesimals then the loss of sequence convergence means you break all the irrational decimals, but 0.999... is still 1! That just made my day :) Endomorphic (talk) 17:06, 28 May 2008 (UTC)
Infinitesimals are brought up by the people disputing the equality. Almost everyone that disputes it thinks 0.999... and 1 differ by a non-zero infinitesimal (although they may not know the word), which makes infinitesimals extremely relevant to the discussion. You say the Archimedean property is relevant, and it's a property about infinitesimals, so how can it be relevant but infinitesimals not be? --Tango (talk) 17:42, 28 May 2008 (UTC)
"I don't believe it because of onion rings!" doesn't make them relevant to the solution. The Archimedean property is relevant only in the sense that it confirms that infinitesimals aren't. It's not so much about infinitesimals as it is about *not* having infinitesimals. Endomorphic (talk) 18:58, 28 May 2008 (UTC)
Endomorphic, you are confusing "some people misuse structures having nonzero infinitesimals to justify their misunderstanding of 0.999..." and "structures having nonzero infinitesimals are not the same as the real numbers" with "infinitesimals are bad". For example, surreal numbers have some importance in combinatorial game theory. Also, the hyperreal numbers and the surreal numbers are fields, so dividing by a nonzero infinitesimal is entirely possible. -- Meni Rosenfeld (talk) 19:19, 28 May 2008 (UTC)
Everything you say is correct, Meni, but although the surreal numbers are awesome (and useful!) this is a good example of exactly what I'm complaining about. Surreal numbers destroy not the field nor the ordering nor sequence convergence, but decimal representation itself. I'd call that a bit of a problem for an article on 0.999... This arguments page should discuss 0.999... in a manner accessible to people who are having difficulty with the whole "0.999...8" fudge; that's our target audience. Sure, I'm guilty of lying to children, but I don't want to explain transfinite induction to someone having difficulty with a non-terminating string. Game theory could be added to my list of things we get distracted by. Endomorphic (talk) 12:00, 29 May 2008 (UTC)
I agree completely that decimal expansions in general, and 0.999... in particular, have nothing to do with hyperreals, surreals and the like. The correct response to "0.999... should be treated as a hyperreal" is "no, it shouldn't". But this doesn't mean that understanding some of the underlying concepts isn't relevant to the discussion. Sooner or later, any opponent will claim that 1 - 0.999... is an infinitely small number. Some will be satisfied with the mildly lying-to-childrenish reply "there is no such thing", while others would want to get to the root of the problem. You can only fully understand something if you call it what it is, and the name for those hypothetical infinitely small numbers is "infinitesimals". The next step is to explain that the real numbers, which we care about here, have the Archimedean property, which means that they have no nonzero infinitesimals, while for full disclosure mentioning that there are alternative structures without this property and emphasizing their irrelevance. If the person is truly willing to understand, it should take no more than a few additional steps to understand what it is exactly that he is confused about and clarify it. If not, well, nothing we can do will help, and the problem reduces to finding a good stopping point. -- Meni Rosenfeld (talk) 13:38, 29 May 2008 (UTC)
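For readers who want to see the "10c - c" argument mentioned above grounded in the limit definition rather than digit-shifting: 0.999... is, by definition, the value of the series 9/10 + 9/100 + 9/1000 + ..., and the manipulation can be applied to the exact finite partial sums, where every step is elementary arithmetic. A sketch with exact rationals (stdlib only; no infinitesimals needed):

```python
from fractions import Fraction

def partial_sum(n):
    """S_n = 9/10 + 9/100 + ... + 9/10^n, computed exactly."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

# The 10c - c manipulation, applied to the exact partial sums:
# 10*S_n - S_n = 9 - 9/10^n, hence S_n = 1 - 1/10^n.
for n in range(1, 10):
    assert 10 * partial_sum(n) - partial_sum(n) == 9 - Fraction(9, 10**n)
    assert partial_sum(n) == 1 - Fraction(1, 10**n)

# The gap 1 - S_n is exactly 1/10^n, which drops below any positive bound,
# so the limit of the partial sums (the real number 0.999...) is exactly 1.
print(1 - partial_sum(9))   # 1/1000000000
```

Nothing here uses more than what is needed to define decimals in the first place, which is Endomorphic's point.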

graphing this

Comparing a graph of y = 1 to graphs of y = .9, then .99, then .999, and imagining that continuing forever, it seems that .999... will always be less than 1. I understand the algebraic proofs and accept them, but cannot reconcile them with this graph in my mind. When talking with other students about this, they say something like "at infinity the graphs converge". But obviously the graphs never converge, just as parallel lines never converge. Do parallel lines stop being parallel at infinity? Why should approaching lines that will never touch be treated any differently? Bunny138 (talk) 17:34, 27 May 2008 (UTC)

"They do that at infinity" is a common argument that is hard to get to the bottom of, but yes, it is commonly considered that parallel lines do meet at infinity.

To respond to your graphing argument: yes, the line you draw is always less than 1.0. But then the line is always less than 0.999... too (since at every point on the line there are always more 9s still to be added to the value). At every point, the line you draw is less than 1.0 by exactly the same amount that it is less than 0.999... DJ Clayworth (talk)
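The point about the graph being below 1 and below 0.999... "by exactly the same amount" can be checked with exact rational arithmetic. This is just an illustrative sketch (the name `truncation` is mine, not from the discussion): the n-digit truncation 0.99...9 falls short of 1 by exactly 10^-n, and since 0.999... = 1, it falls short of 0.999... by that same amount.

```python
from fractions import Fraction

def truncation(n):
    """The n-digit truncation 0.9, 0.99, 0.999, ... as an exact fraction."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

# At every finite stage, the gap below 1 is exactly 10**-n;
# it shrinks but is never zero at any finite stage.
for n in [1, 2, 3, 10]:
    gap = 1 - truncation(n)
    assert gap == Fraction(1, 10**n)
    print(n, gap)
```

Each finite stage of the drawing really is below 1 — the question the thread is settling is about the completed decimal, not about any finite stage.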

For the parallel lines meeting at infinity, you might be interested in projective spaces. Huon (talk) 18:27, 27 May 2008 (UTC)
Consider the function y = -(1/x) + 1. This has a limit of 1 as x approaches positive infinity; it increases toward 1 but never reaches it. It never reaches 0.9~ either - in fact, if 0.9~ were less than 1, then the function would have to cross 0.9~ at some point (and then go on to lie between 0.9~ and 1), but this never happens.
The problem you're having is that you think of 0.9~ not really as a number, but as a process or a graph. As a result, you don't merely say "0.9~ is not equal to one" - you go on to say that it "will always be" less than 1. Have you ever heard someone say that 1 will always be less than 2, and that it will never equal 3? They're true, but it's kind of a funny way to talk about static numbers. Gustave the Steel (talk) 19:43, 27 May 2008 (UTC)

Why think about .999...?

"Science alone of all the subjects contains within itself the lesson of the danger of belief in the infallibility of the greatest teachers in the preceding generation....Learn from science that you must doubt the experts. As a matter of fact, I can also define science another way. Science is the belief in the ignorance of experts." - Richard Feynman - Algr (talk)

Very true, but we're not talking about Science, we're talking about Maths. Very different subjects. --Tango (talk) 22:57, 27 May 2008 (UTC)

Is there not a better forum for the discussion of random quotes? Tparameter (talk) 02:18, 28 May 2008 (UTC)