Talk:Doomsday argument/Archive 1

Major rewrite

I have taken the liberty of a major rewrite to the page. I think I've presented the basic argument in the most straight-forward way possible. A more formal version of the argument would have to be made using a bit of Bayesian analysis. I've taken the current cumulative human population to be 50 billion which is a compromise between Gott's 70 billion and the previous page's 20 billion figure. If people really don't think my version of the argument is any good then of course they're free to discard it (and replace it with a previous version if they want).

User:John Eastmond 4 Oct 2004

Taken out Singularity section

The only two things Heinz von Foerster's argument has in common with Brandon Carter's Doomsday Argument are the words "Doomsday" and "Population" and nothing else. The Doomsday Argument is a probabilistic argument based on cumulative population whereas von Foerster's argument is based on an extrapolation of a particular model of population growth.

User:John Eastmond 30 Nov 2004

The Onion extrapolated the survival of human culture a couple of years ago. They calculated that the earliest date pop-culture is nostalgic about is 9.5 years ago, and that every passing year reduces that by about 4 months, so that the "world will run out of past" circa 2030. The Onion's 'singularity' is probably a lot more credible than von Foerster's, and a better comparison to the probabilistic DA. My comments in the next section (on grouping the von Foerster singularity with this in a category) were meant as a reply to John Eastmond's point here. Wragge 00:56, 2005 Apr 29 (UTC)

Special Relativity and the Reference Problem

I've been thinking of putting the following paragraph in but I'm not sure about it. I'd be interested in any comments about it:-

There has always been the problem of which observers to include in the definition of "humans": the so-called Reference problem. Should we include just homo sapiens or should our definition include all "intelligent" observers together with any artificial intelligences we might create in the future?

Actually there is a more fundamental constraint on the definition of the class of observers arising from considerations of Special Relativity. The Doomsday argument asks us to consider our position within the chronologically ordered list of all human births. However, the human births form a set of distinct "events" in spacetime. The order of these events along a "timeline" actually depends on the velocity of a particular observer's frame of reference. Thus different observers will have conflicting ideas about the chronological order of the birth events.

Perhaps the Doomsday argument can only be pursued in terms of the lifetime of the individual observer whose physical states form one continuous worldline of events. Such a worldline does have an invariant "proper" time associated with it. As each event is causally connected to the next in a chain of events there is no ambiguity about their chronological order.

Thus it seems that the only reference class that can be used is the set of days (say) that comprise the lifetime of the individual. When we wake in the morning our experience of "today" selects it from the set of N days that will comprise our life. As each morning awakening is equivalent to any other (apart from arbitrary details) then each day has the same prior probability. Thus the prior probability of "today" is always 1/N. One thus deduces that N must be finite in order that the prior probability of today is non-zero. Have we proved that an individual's immortality is impossible? :)

User:John Eastmond 20 Jan 2005

Well maybe. Assuming that the argument is correct, N cannot be infinite but it can still be boundless (in the sense that whichever finite value you choose for N, I can choose a bigger one and we can repeat the process without limit). You might say that the argument would allow us to rule out the possibility of immortality but not the possibility of living forever, one day at a time. If that makes sense, <grin>. -- Derek Ross | Talk 15:43, 2005 Jan 20 (UTC)

What do you think about the special relativity objection to applying the argument to the human race? Apparently a set of birth events in spacetime can only be said to have a time-order if it is possible in principle to send a slower-than-light signal from each event to the next. But this is an unnatural constraint which need not be realised at all. One could imagine the human race colonizing the galaxy. It could easily be the case that a birth event on one side of the galaxy cannot be linked by a slower-than-light signal to a birth event on the other side of the galaxy (in other words each is outside the "light-cone" of the other). If the Doomsday argument is applicable to populations of observers then surely it should be applicable to all populations regardless of the spacetime positions of the individuals' birth events? The fact that it isn't seems to show that it can only be applied to an individual's set of life-events (that are naturally causally connected and thus time-ordered).

User:John Eastmond 21 Jan 2005

I'll need to think about that, John. However note that this sounds suspiciously like original research and may therefore be irrelevant to this Wikipedia article. -- Derek Ross | Talk 05:46, 2005 Jan 24 (UTC)

You're right - these are original ideas and therefore should be published elsewhere.

User:John Eastmond 2 Feb 2005

New "Other Versions" sub-section

Henrygb suggested that I make the point about the choice of sampling variable more explicit, so I've added a new sub-section, "Sampling from observer-moments", under "Other Versions" that details an alternative f distribution, based on uniform sampling over (life-span * n). This includes the earlier reference I made to Bertrand's paradox (probability), but I now link directly to the definition.

Unfortunately, some of the Anthropic subjects I refer to in this section aren't defined yet as Wikipedia pages. Rather than make red links I've added cross-references to discussions of these topics in other articles. Is this better style than adding red-links if those red-links already exist (on Anthropic bias)?

I am concerned that this section might be too long, but I wanted to give a full description of this argument. Is it too wordy, or still not explicit enough?

I added the sub-section to "Other Versions" partly because that only had a single subsection. Is this the appropriate place?

Anyway, it should act as a stub for extension of the function-form side of the definition. Talk:Doomsday argument/Archive 1#Why is N=# of humans? relates to this.

Wragge 18:40, 2005 Apr 8 (UTC)

Infinite Expectation

Exactly, it is not a hoax, but its name and the way it is presented are very unfortunate, especially the calculation of the doomsday date. The fact that we have lived here for 3 billion years (counting the whole evolutionary line from microbes) does not mean that we have a recipe for survival for another 3 billion years, which is counterintuitive. In the real world, if a stone has been lying somewhere for 10 years, we can infer that the stone is in a stable place and expect it to lie there for another 10 years in our reality. But in the case of unique things, such as physical constants or the existence of our race for time T in the past, the anthropic principle says that we do not know which alternative reality we are in, apart from the trivial fact that it is one of those that has allowed our existence for a certain time. The method of multiplying 60 billion by a constant, such as 2, to get the expectation of the total number of human beings assumes that alternative realities are "normal" in some way, e.g. that they do not usually have a half-life of 1 second before totally blowing up.

the main point of the argument is NOT that the human species may become extinct. (section Numerical estimate of Doomsday)

Hello, in the "Numerical estimate of Doomsday" (the first section), it says:

"N is the total number of humans who will ever be born"

and at the end of the section, it says:

"The argument predicts, with 95% "confidence", that humanity will disappear within 9120 years. Depending on the projection of world population in the forthcoming centuries, estimates may vary, but the main point of the argument is that the human species may become extinct."

However, the main point of the argument cannot be that the human species will become extinct, because this has been assumed at the beginning ("N is the total number of humans who will ever be born"). The main point of this particular section is the actual numerical estimate, I think.
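
For reference, the 9120-year figure quoted above can be reproduced with a short calculation. The stable-population and life-expectancy inputs below are assumptions chosen to match that figure (the article may state slightly different ones); this is only a sketch of the arithmetic, not the article's own derivation.

```python
# Sketch of the numerical estimate, under assumed inputs.
n = 60e9                 # cumulative human births to date (the figure used above)
N_upper = 20 * n         # 95% confidence: we are past the first 5%, so N < 20n
future_births = N_upper - n

population = 10e9        # assumed stable future world population
life_expectancy = 80     # assumed average lifespan in years
births_per_year = population / life_expectancy

print(round(future_births / births_per_year))   # -> 9120 years
```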

Why does this have its own page?

Really, put it under Brandon Carter's page or whoever came up with it. None of this is proven fact or even based on facts; it has no place in an encyclopedia except as a mention under the page of whoever came up with the argument. If I come up with a statistical argument about the likelihood of a piece of toast landing on the side with jam, I don't expect it to have its own Wikipedia page. It's garbage — Preceding unsigned comment added by 86.43.161.70 (talk) 22:37, 28 April 2012 (UTC)

arXiv.org article claims flaws in Gott's rule

http://arxiv.org/abs/0806.3538 Carlton Caves: "Gott has promulgated a rule for making probabilistic predictions of the future duration of a phenomenon based on the phenomenon's present age [Nature, Vol. 363, 315 (1993)]. I show that the two usual methods for deriving Gott's rule are flawed. Nothing licenses indiscriminate use of Gott's rule as a predictor of future duration. It should only be used when the phenomenon in question has no identifiable time scales." —Preceding unsigned comment added by Jfaughnan (talkcontribs) 02:40, 8 July 2008 (UTC)

Species extinction has no identifiable time scales, so it would qualify as an appropriate use of the rule even by that statement. Warren Dew (talk) 18:59, 15 August 2009 (UTC)

Irrelevant

We don't care whether the "Doomsday argument" is sensible or not folks, we only care whether or not our article correctly describes it.

It seems that it CANNOT be accurately described if it has been postulated by anyone with a reasonable sense of statistical inference and schooled in logic. The Doomsday argument is patently silly. —Preceding unsigned comment added by 65.126.123.226 (talk) 13:42, 27 April 2008 (UTC)

Simplistic view

In view of the lively debate conducted by many smart people, we would be wrong to believe the simplistic view that the problem is just caused by an incorrect use of probability.

I would add: there are results of probability that are counter-intuitive, and yet correct. E.g. the Birthday paradox. The challenge with the doomsday argument is to find where it is incorrect, and we can't just say it is incorrect because it is counter-intuitive. User:Pcarbonn 18 Apr 2004

Actually we would be right to believe that the problem is caused by a simplistic use of probability. Take:
Assuming the following (held out to be valid computations of probability):
  • If you pick a number uniformly at random in a set of numbers from 1 to N, where N is unknown to you, and if we name that number j, then:
    • Your best estimate of N is 2 × j,
    • You can say with 95% confidence that N is between 40/39 × j and 40 × j.
2 × j is not your best estimate of N, it is an unbiased estimate - but a requirement to be unbiased often produces strange results. The maximum likelihood estimate of N is j, which makes the doomsday argument worse. Your best estimate of N will depend on your prior probability distribution for N (and your loss function); assuming an improper prior that all values of N are equally likely and even after seeing j, your expected value for N will be infinite.
Similarly you cannot usually say with 95% confidence that N is between 40/39 × j and 40 × j. You might be able to say that if N is not between 40/39 × j and 40 × j then the probability of seeing j or a more extreme value is less than 5%, but that is not the same; as a statement about N it confuses probability with likelihood. Again, any statement you want to make about N depends on your prior.--Henrygb 13:01, 28 May 2004 (UTC)
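
For what it's worth, the frequentist sense in which the interval "works" can be seen in a minimal simulation sketch (the value of N and the number of trials below are arbitrary choices of mine): over repeated draws of j the interval [40/39 × j, 40 × j] contains N about 95% of the time, which is a statement about the procedure over many draws rather than a posterior statement about N given one observed j.

```python
import random

# Coverage check for the Gott-style interval [40/39 * j, 40 * j],
# with j drawn uniformly from 1..N (N and trial count are arbitrary).
N = 1_000_000
trials = 100_000
hits = 0
for _ in range(trials):
    j = random.randint(1, N)
    if (40 / 39) * j <= N <= 40 * j:
        hits += 1
print(hits / trials)   # close to 0.95
```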

Reductio ad absurdum

Not sure if someone else has already addressed this, but wouldn't the type of reasoning here also imply that there is a 10% chance that we are in the last 10% of humans born, and thus n/N > 0.9, implying N < 1.11n, so that the total number of humans born would be less than 66.6 billion. In other words, there would be a 10% chance that, assuming population holds constant, humanity will go extinct within 90 years. Miraculouschaos 03:26, 10 May 2006 (UTC)
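
The arithmetic here is easy to check. In the sketch below, the birth-rate figures are assumptions of my own (a constant population of about 6 billion with an 80-year lifespan, i.e. roughly 75 million births per year), which reproduce the ~90-year figure.

```python
# Sketch of the arithmetic above, under assumed inputs.
n = 60e9                     # humans born so far
N_upper = n / 0.9            # 10% chance we are in the last 10%: n/N > 0.9
remaining_births = N_upper - n
births_per_year = 6e9 / 80   # assumed constant population / lifespan

print(N_upper / 1e9)                        # ~66.7 billion humans in total
print(remaining_births / births_per_year)   # ~89 years
```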

What is so absurd about that? — Preceding unsigned comment added by 190.101.103.88 (talk) 18:00, 28 November 2011 (UTC)

Begging The Question

Something that doesn't seem to be addressed in the article is the objection that the Doomsday Argument is circular reasoning, or begging the question. It assumes that N is finite in setting up the problem. It is not valid to say that there is a 95% chance that we're going to be gone in ten thousand years unless there is a 95% chance we're past the first 5% of people. This assumption is only valid if there is reason to believe with nontrivial certainty that N is finite.

I do actually believe that N is finite, but my personal opinion is irrelevant. Consider that if N is infinite, there is by definition a 0% chance that we're beyond the first 5% of people, because one tends not to run out of infinities.

The "What the Doomsday Argument is not" section addresses that it's not a guarantee we're going to die out, but still asserts the 95% chance that we will. This is not correct. It may be true that there is a 95% chance we have less than ten thousand years to go if and only if we will ever go extinct, and even that's quite questionable for reasons stated in the entire remainder of the article, but as it is, it's presenting itself as if it's claiming that there's a 95% chance we will die out and it will happen in that period of time- which is not correct.

I'm not making myself clear enough, which is why I haven't touched the article. Somebody ask questions so I make sense, please? --Kistaro Windrider 05:35, 31 July 2005 (UTC)

Probability distributions must be normalizable. If one has a choice between: (1) There will be at maximum a finite number N of human beings or (2) There will be an infinite number of human beings (i.e., N is unbounded) then one cannot capture both of these in a single normalizable distribution. One would have to go to a mixture model in which a certain probability is put on possibility (1) and the rest on possibility (2). The distribution under (1) is normalizable. Under possibility (2), the human race doesn't go extinct, but the probability of there being a total of n humans is not normalizable. I do not see how you calculate anything useful under (2), but a Bayesian calculation might be possible. One would require assumptions that are not displayed in the usual Doomsday calculations. Bill Jefferys 00:04, 1 August 2005 (UTC)

60 billion??

Some scholars have used an estimate of the running total of the human population (perhaps 60 billion people), but this is an incorrect figure. Something like 60-70% of people who have ever lived are still alive, putting the running total of humankind closer to around 10 billion. (If this figure seems shocking, remember that the human population has grown exponentially with a doubling period significantly shorter than a human lifespan.) Anyone got a better figure? Mat-C 19:40, 3 Jul 2004 (UTC)

I don't have a better figure, but just following your logic (6 billion people now, having doubled every two generations of 25 years or so) backwards seems to put Adam and Eve at around 400 AD. Not even young-earth creationists have quite that time scale. The article Evolution of Homo sapiens suggests something of the order of 250,000-400,000 years ago for the first modern humans, and 2 million year for the first of the genus. --Henrygb 17:15, 20 Jul 2004 (UTC)
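
Henrygb's back-calculation is easy to reproduce. The sketch below uses the assumptions stated above (a present population of 6 billion, doubling every two 25-year generations, i.e. roughly every 50 years).

```python
import math

# How far back does a 50-year doubling period put a population of 2?
current_population = 6e9
doubling_period = 50                      # years (two 25-year generations)
doublings = math.log2(current_population / 2)
years_back = doublings * doubling_period
print(round(years_back))                  # ~1570 years, i.e. roughly AD 400 seen from 2004
```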


From _A Concise History of World Population_ by Massimo Livi-Bacci

[1]


The total historical world population is:
  • Before 10,000 BC: 9.29 billion
  • 10,000 BC to "0": 33.6 billion
  • "0" to AD 1750: 22.64 billion
  • AD 1750 to AD 1950: 10.42 billion
  • AD 1950 to AD 1990: 4.79 billion
  • Total: 80.74 billion


--80.188.210.180 13:08, 19 Aug 2004 (UTC)

Added Many-Worlds section

This is my proposed solution to the Doomsday Argument - hope that's ok.

User:John Eastmond 16 Nov 2004

Sorry, but I'm afraid it's not. Wikipedia is for writing about pre-existing knowledge, not for contributing original research or theories. Additionally, I find your argument unclear and unconvincing, but this isn't the issue that matters. Write up your theory somewhere else, and it will be mentioned if someone else finds it worthy of being in this encyclopedia. -- Schaefer 05:58, 19 Nov 2004 (UTC)

Fair enough

User:John Eastmond 19 Nov 2004

Dispute

IMO all info should stay. I only removed some references to the history of the singularity which seemed unnecessary (they should be restored to the singularity article if they are not there already). John Eastmond is not the only person to consider the particular interpretation he presents, and there is no reason to remove it. [[User:Sam Spade|Sam Spade Arb Com election]] 16:50, 30 Nov 2004 (UTC)

The Singularity information could be made to complement the Doomsday argument page, but John Eastmond's point seems logical; it would be better to split tangentially related subjects across different pages (by name if possible). I don't know if there is a category like "Doomsday predictions (secular)" but if not it should be created (within Eschatology?) and used to group together this page and Heinz von Foerster's Singularity prediction page. Other proposed category members:
  • technological singularity
  • Doomsday clock
  • mean-time-to-asteroid apocalypse calculation
  • the Victorian calculation of the survival probability of any surname to infinity.

Furthermore, the page is too long now. I propose a [[Category:Doomsday argument refutations]] category, which will organize these and prevent this page getting too long to present a simple definition of "the Doomsday argument".

Wragge 10:05, 2005 Apr 22 (UTC)

Layman edition needed

I hate equations and did not understand the process. Could someone explain with metaphors, ideas, and examples instead of maths? The article should be amended accordingly. [[User:Vrykolaka|Reply to Vrykolaka]] 17:31, 24 Jan 2005 (UTC)

I think you've got a point there. I'm happy for anyone to rewrite the article using examples and thought-experiments.

User:John Eastmond 2 Feb 2005

I added a section with a simplified example. I realise the example comes from a refutation of the argument; however, it did help me to better understand the argument itself. If it does not do justice to the argument, please tell me.

Oops, sorry, forgot to sign! UnHoly 03:39, 14 Feb 2005 (UTC)

Why is N=# of humans?

There seem to be some people here who have studied this argument intensely, so I have a question. Why was N chosen to be the number of humans? For example, I could make the same argument using N as the number of years since humanity came into existence, and I would get a different result because the number of humans who have ever lived is not linear in time.

In fact, I could also make the same argument using a totally arbitrary variable. For example, I could say that N is the total number of humans who ever lived, weighted by how close in time to the dawn of humanity they were born. Then present-day humans would contribute next to nothing to N.

My point is, prior to computing the probability, one should be convinced that the method will yield a probability that has meaning. In this case, there should be an argument that applying this formula yields an estimate for the lifetime of humanity, and that N should be taken to be the number of humans. I am not convinced.


UnHoly 01:05, 6 Feb 2005 (UTC)

The Copernican principle was cited as the reason to use N as the number of humans. It is not equally likely to be in any year because the years are distinguishable and differ in whether a soul prefers them or not (different numbers of humans are born every year). However, it is equally likely to have been any human because embryos are indistinguishable in whether a soul prefers them or not. This argument is based only on the position number of the soul among the humans, not on the time the soul entered the world. Another argument could possibly be formed using your N, and that would also provide a 95% confidence interval (meaning 95% of intervals produced with your method would be correct, just like 95% of intervals produced using this doomsday method would be correct). However, since both intervals have equal confidence, and this interval is more limiting (has a shorter range), it is more useful to use this interval. 98.110.54.24 (talk) 00:33, 6 May 2010 (UTC)


I'm beginning to think that there is a problem with the argument when applied to the human race. It seems to make the tacit assumption that the eventual chronologically ordered list of all the human beings ever to live could have had a different order. This is not possible. If I swapped places with one of my ancestors so that I was born in his time and place and he in mine then I would *be* him and vice-versa - there would be no change. If one cannot consider such permutations as possible then I think that one cannot argue that one is equally likely to find oneself at any particular position n.

Normally one would say that given that there will be N humans then there are N! (i.e. N × (N-1) × (N-2) × ... × 3 × 2 × 1) ways that such a list of humans could be ordered. If we assume that we are at any position n then there are (N-1)! ways in which the other humans could be placed around us consistent with us remaining at position n. Thus the probability that we are at any position n is given by (N-1)! / N! = 1/N, i.e. that, a priori, we are equally likely to be at any position n within the list. But if it is not possible to permute a set of humans along a timeline then we can't derive this uniform prior probability distribution for our position. Without this uniform distribution the argument can't get off the ground.

John Eastmond 17:15, 14 Feb 2005 (UTC)


Well, I am not sure this answers my question, but it touches on some of the same points. For example, if you take N to be the lifetime of humanity, it is an ordered set (1881 is not 1534), while if you take N to be the number of humans it is not an ordered set. Then both these possibilities would yield different answers, while they should be the same, since there is a perfect correlation between the number of humans and the number of years (we know how many humans were born every year). UnHoly 22:45, 14 Feb 2005 (UTC)


Indeed, the reference class is a hugely important factor in this problem. I would argue that the reference class has exactly one member: me, right here, right now. Assuming that the world has some sort of consistency with my memories (I hope!), then I can only "find myself" in the place and time that I'm in, or one with imperceptible differences. Of course, now it becomes a question of metaphysics, time, and identity, but I am sure I can resolve it in a way that leaves everyday life on a firm footing, and that's what matters. --nanite 142.207.92.56 23:53, 27 Apr 2005 (UTC)

It's certainly true that the Doomsday argument leaves 'everyday life' on a firm footing (in fact, a firmer footing than without it, by Gott's estimates). It calculates a very low chance of extinction within the lifetime of anyone reading this, and if that's 'what matters' then the argument is irrelevant. I can only speculate on why many researchers consider the problem worth thinking about; maybe because they want to challenge ideas of permanence, or because they are worried about their descendants.
I agree that in metaphysical terms the only thing that can ever matter is the subjective experience of the (short-lived) individual. Widening the reference class beyond the individual is questionable, although Gott's "Copernicus method" approach is agnostic: he's not really invoking a reference class of more than one. All he says is that within the single reference class of your individual lifetime the things you see will probably be typical (a tautology). Wragge 10:44, 2005 Apr 28 (UTC)

A query for Henrygb

Henrygb, I don't understand. In response to this:

If you pick a number uniformly at random in a set of numbers from 1 to N, where N is unknown to you, and if we name that number j, then:

Your best estimate of N is 2 × j,
You can say with 95% confidence that N is between 40/39 × j and 40 × j.

You disagreed, saying first that:

The maximum likelihood estimate of N is j,...

Now, before we start getting all Bayesian, this already strikes me as odd. I can only think that there are two (or more) different understandings of the case. Assume that your situation, as the random selector of the number, is as follows: You know that you are to select with uniform probability (p=1/N) just one natural number j from a set whose lowest member is 1, whose highest member is N, and whose other members (if any) are all the natural numbers that are larger than 1 and no larger than N, but you have no information about the value of N. In this case, surely your best estimate, or maximum likelihood estimate, for N is not j. How could it be? It must indeed be 2 x j. (If not, why not?) But then, what is the alternative understanding of the case that you are working from, if your answer is j? Please explain. --Noetica 14:12, 17 Feb 2005 (UTC)

OK, let's work out the maximum likelihood. The probability of getting j given N is
(1/N) × I(1≤j≤N) where I(.)=1 if the inside (.) is true and =0 otherwise
This is also the likelihood of N given j.
It is obviously maximised when 1/N is maximised and I(1≤j≤N)=1,
i.e. when N is minimised subject to the constraint N≥j,
i.e. when N=j.
The likelihood when N=j is 1/j, and when N=2j it is 1/(2j), and the first is higher.
Clear enough? --Henrygb 10:12, 18 Feb 2005 (UTC)

Thanks for your response, Henry. But alas, it doesn't seem to me that my question (presented in non-symbolic and non-formulaic terms) has been answered. A big part of this is that I don't understand all the symbols you have used. If you have still more patience, you can help me (and possibly others here who are interested in the Doomsday Argument) by addressing the following. (But this will only work if you communicate in something other than the specialised symbolic manner appropriate to well-practised experts in your field.)

I take it that maximum likelihood can deliver, in practical terms, a best bet. (Am I right?) Now, suppose that you have a situation isomorphic to the one I carefully described earlier, but with numbered marbles to select from an urn. You are told that there are exactly N marbles concealed in the urn, that each marble has been uniquely marked with one natural number, and therefore that each natural number from 1 to N is represented by exactly one marble. Beyond this, you have no information about N: you know only that it is some natural number (not excluding 1). You select just one marble from the urn (with all marbles having an equal probability of being chosen). The number on your marble turns out to be j. You are then given a free bet concerning the value of N.

If the above is not isomorphic to what I outlined earlier, please tell me how it is not. Then say what your best bet is for the value of N. My bet would be 2xj. Would yours be j? If so, that seems bizarre to me. It would mean that you think it more probable that the marble you happened to select was the highest-numbered marble than that some marble numbered j+1 (for example) was the highest-numbered. Why? What am I missing? Extra points will be awarded for a response confined to plain but precise English. Thanks again. --Noetica 13:27, 18 Feb 2005 (UTC)

  • "Best" is not in itself a well-defined word. So you need some kind of criterion to judge it. You seem to find bias bizarre and a reason for rejecting a method. Fair enough. But you have to recognise that choosing an estimate which has a lower likelihood than another can be seen as peculiar by others, especially if you have a free bet which I assume pays off only if you guess N correctly. The argument goes that the smaller N is, the more likely you are to choose the jth marble, providing that there are at least j marbles in total. So in your free bet you may be wise to guess that N is j. And indeed this is correct and entirely logical, unless there is some external reason why you think N is more likely to be a particular high value than a particular low value.
  • Try it yourself by getting your favourite spreadsheet to choose lots of different Ns (so long as no particular high value is more likely than a particular low value), then in each case choosing a random j from 1 through to N, and then seeing how often N is equal to j or to 2×j. (A short simulation along these lines is sketched after this list.)
  • Note that 2×j will never be correct when N is an odd number. If you restrict Ns to even numbers then guessing j or j+1 (so your guess is even) will still be better than guessing 2×j.
  • Having said all that, my personal view is that you would do better to have some view of how N is chosen (a prior distribution), and to understand what counts as a good guess (a loss function). But this points at Bayesian methods, rather than unbiased or maximum likelihood arguments from ignorance. --Henrygb 16:54, 18 Feb 2005 (UTC)
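
Here is a minimal sketch of the spreadsheet experiment suggested in the second bullet above, written in Python (the choice of N_max and the number of trials is arbitrary).

```python
import random

# Sketch of the suggested experiment: draw N uniformly from 1..N_max,
# then j uniformly from 1..N, and count how often the guesses N = j
# and N = 2*j are exactly right.
N_max = 1000
trials = 200_000
exact_j = exact_2j = 0
for _ in range(trials):
    N = random.randint(1, N_max)
    j = random.randint(1, N)
    exact_j += (N == j)
    exact_2j += (N == 2 * j)
print(exact_j / trials, exact_2j / trials)   # guessing j is exactly right roughly twice as often
```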

Ah yes! I came to the core insight into all of this by myself, a couple of hours after posting what I did, Henry; and now your careful reply confirms and expands things for me. As you suggest, "best" is not as well-defined as I had assumed. In practical terms, given that the bet I speak of pays off only if I get N exactly right, then I should indeed bet on j, and not on 2 x j. I was mistaken. I have now re-found my marble.

But my remaining concern is with the relevance of this to the Doomsday Argument. You write (see your first comment on this page):

2 × j is not your best estimate of N, it is an unbiased estimate - but a requirement to be unbiased often produces strange results. The maximum likelihood estimate of N is j, which makes the doomsday argument worse.

Now, here you yourself use "best" imprecisely (which may have contributed to my own uncertainty). With the Doomsday Argument, isn't an unbiased estimate of N what we're after, rather than a maximum likelihood estimate? To put this in practical terms with bets concerning the marbles (and assuming replacement, for selections by other bettors): if offered a free bet that paid off only when your estimate is no further from the true value of N than some competitor's estimate (which is unknown to you), you'd bet on 2 x j, wouldn't you? Isn't the task in Doomsday to bet on a year that maximises your chances of winning this sort of bet? Picking the exact year of extinction is not the task. So 2 X j seems best for our purpose, and I can't yet see that this is a "strange result". Will you help once more, without yet getting Bayesian (which may in the end be best, I agree)? --Noetica 21:42, 18 Feb 2005 (UTC)

If I was betting against other people, I would try to get into an area where I thought other people were not betting so as to maximise my chance of winning. But that is a different game, and there are plenty of paradoxes involved in such betting games.
What I was trying to say at the top of the page was that the Doomsday argument does need Bayesian analysis and some assessment of probable patterns of populations and extinctions. I believe that traditional statistical methods often produce a nonsense, and this is an example. We do have information which might inform the Doomsday argument: we think we know something about past extinctions of other species. We also have little evidence that future population trends are driven by what has happened in the distant past; the present (and perhaps recent past) may be all that matters. And there is a change of scale problem too: the Doomsday argument is based on human population numbers, not the length of time the human species has existed (which would give much longer estimates for the remaining time). But this is strange: what it says is that the faster population grows, the sooner we will become extinct. Yet the evidence is that it is species whose populations are in decline which become extinct faster, and that ones which are distributed worldwide with a growing population survive longest. Ignoring all this is just daft and in my view invalidates the argument. --Henrygb 23:55, 18 Feb 2005 (UTC)

Thanks once more, Henry. I agree with a lot of what you say. I also share your reservations about betting games, and that is why I sought to word things carefully: "...a free bet that paid off only when your estimate is no further from the true value of N than some competitor's estimate (which is unknown to you)...". Though this doesn't quite do the job, such games are often a good way to get concrete and clear about things. I'll try a simpler variant on you:

In the marble set-up as described above, with just one selection, you will be fined |N-(your estimate of N)|x$100. What do you give as your estimate of N?

If I were motivated solely to minimise the expected value of my fine, I'd give 2 x j. What would you give, with the same motivation?

I put it to you that, for as long as we are going to resist the entirely necessary Bayesian analysis, this 2 x j estimate is the most apt and relevant one in the case of the Doomsday Argument. And it is probably a good idea to get clear about the non-Bayesian way of viewing things (which may not deliver "nonsense", as you have it, but merely a less accurate result) before proceeding to the Bayesian. --Noetica 01:04, 19 Feb 2005 (UTC)

How can you resist something necessary?
You give a particular loss function, based on the absolute deviation. So the answer must be the median of the posterior distribution, though you will need a prior distribution to work that out. --Henrygb 01:59, 19 Feb 2005 (UTC)

To be more precise, then: "...for as long as we are going to resist the indisputably preferable Bayesian analysis,...". And I put the following to you again, still seeking an answer that will not appeal to Bayesian notions:

In the marble set-up as described above, with just one selection, you will be fined |N-(your estimate of N)|x$100. What do you give as your estimate of N?

If I were motivated solely to minimise the expected value of my fine, I'd give 2 x j. What would you give, with the same motivation?

--Noetica 02:16, 19 Feb 2005 (UTC)

  • I wouldn't play the game. It is not much better than not being told j and simply having to guess N and face a fine for getting it wrong. What makes one game more acceptable than another?
  • Seriously, you haven't given a motivation for your guess. Unbiasedness isn't justified by the structure of the fine. Suppose you knew that N could be any number from 1 to 6 with equal probability, and you were told j=4. Would you guess 8, just because 2×j is the unbiased estimator? Yes and you are throwing money away; no and you concede that unbiased is not particularly desirable. (A numerical check of this example is sketched after this list.)
  • There might be a Bayesian way 2×j can be justified, I think for example if you use an improper prior where the probability of N is proportional to 1/N and where the fine was proportional to (N-(your estimate of N))^2. If you take the same improper prior and your fine structure then 2×j−1 might be better. But there is no particular justification for that prior; why not use an improper prior where the probability of N is proportional to 1 and get an infinite estimator? One piece of data is not enough to stake your wallet on.--Henrygb 21:20, 19 Feb 2005 (UTC)
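
Here is a quick numerical check of the example in the second bullet above (prior N uniform on 1 to 6, observed j = 4). Applying Noetica's fine of |N - guess| x $100 to that example is my own combination, purely for illustration; the posterior follows from the assumptions stated there.

```python
# Sketch: prior N uniform on 1..6, observed j = 4, fine = |N - guess| * $100.
# The posterior P(N | j) is proportional to 1/N for N >= j (and zero otherwise).
j = 4
support = [N for N in range(1, 7) if N >= j]
weights = [1 / N for N in support]
total = sum(weights)
posterior = {N: w / total for N, w in zip(support, weights)}

def expected_fine(guess):
    return sum(p * abs(N - guess) * 100 for N, p in posterior.items())

for guess in (4, 5, 6, 8):
    print(guess, round(expected_fine(guess), 2))
# Guess 5 (the posterior median) minimises the expected fine;
# guess 8 = 2*j does far worse, as argued above.
```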

As I present the example with the fine, "not playing the game" is no option for you. How could we think it is? No one would volunteer to be in a game in which the best outcome is a fine of $0, and there are many outcomes that are ruinous!

I dispute this in your analysis: "It is not much better than not being told j and simply having to guess N and face a fine for getting it wrong." That is quite a different case, as any attentive examination of the way I set things up will show. You continue, a little further on: "Suppose you knew that N could be any number from 1 to 6 with equal probability, and you were told j=4." But this is to ignore the details of the case as I set it up, in which all you know about N is that it is some natural number. If you alter your prior epistemic situation so radically as you propose, of course the estimate 2 x j will not be good!

The rest of your reply goes into Bayesian matters that I am not concerned to address.

To summarise (from my point of view, at least), I originally asked you to explain something in your mention of maximum likelihood estimates. I am satisfied with this, and I thank you for helping me to clear up a confusion I had. But I say that maximum likelihood estimates are not as relevant to Doomsday as unbiased estimates are. While you were happy to say something about maximum likelihood estimates without wheeling in Bayesian notions as a matter of course, I have not succeeded in pinning you down to a similar statement regarding unbiased estimates, in a well-described situation. Nevertheless, I have now gathered all the information I need (including some meta-information about what virtuoso Bayesians are ready to commit themselves to, perhaps!). I have no further questions at this stage. Thanks very much indeed! --Noetica 22:39, 19 Feb 2005 (UTC)

---

Argh, <insert expletives here>. I spent ages tonight thinking about this, and in the end thought I had come up with a great and novel solution. Except now I find it's already been written by Olum. Oh well... Anyway, I'm going to add it, since it's not original research after all. :-) Evercat 02:55, 24 Feb 2005 (UTC)

Did I miss something?

When there were two people, there was only a 0.00000000003 probability that there would ever be 60 billion people. Why doesn't this matter?--66.65.67.135 20:05, 4 Mar 2005 (UTC)

In one sense it does matter, and the Doomsday argument says it will be wrong for 5% of humans, including with hindsight Adam and Eve. It then says that you have a 5% chance of being in that 5%.
In another sense it exposes one possible flaw in the Doomsday argument, as there is no particular reason to suppose that the past of humanity affects the future, except through the present. So the prospects for humanity do not depend on whether you are human number 60 thousand million or human number 60 million million. All that matters is that you are here now and the world is as it is (see martingale). --Henrygb 14:44, 23 Mar 2005 (UTC)

Principle of indifference

I've added a couple of references to the Principle of indifference, since this is a crucial assumption in the linear distribution of n. Another reason I added the reference is that I know the Doomsday argument as the principle of indifference (I came across it by that name, but probably I'm unusual).

I'm not sure if I should have made the second reference (under 'Bertrand') to the article's section on the Bertrand Paradox.

The Principle of indifference#Application to continuous variables section alludes to the question of whether the 'correct' measure is 'humans born', 'years of human civilization', or '(humans born) * (self-aware life-span)'. Each measure gives a different 2-standard-deviation estimate for N. Although the explanation is fairly clear in the link, it is titled 'Application to continuous variables', which 'humans born' definitely is not.

Should this second reference be fleshed out or removed? (unsigned by User:Wragge 21:52, 7 Apr 2005)

I suspect that you should make your second point more explicit and, if you want to, include the link to Bertrand's paradox (probability) there (Bertrand was not a Bayesian as far as I know), and separate it from the Bayesian point, which is slightly different. I wouldn't mention continuous variables at all, which probably should not be mentioned that way in the Principle of indifference.
The Bayesian point is that although some use what they call uninformative priors, this is not quite the same as the Principle of indifference, and even among those who do there is some debate about which is best. But for other Bayesians even such ideas are not acceptable: imagine coming across a tent in a field which you know has been there a day. Would you assign the same personal probability to it being there in a year's time as you would to a brick building which had been finished the day before? I would not, because I have lots of prejudices and indirect information which suggest to me that canvas lasts a shorter time than brick. Ignoring that information might be seen as irrational. --Henrygb 23:59, 7 Apr 2005 (UTC)

Thanks for the advice, Henrygb. If your prior experience of unsigned Wikipedia comments has led you to a high-confidence inference of user inexperience, I can provide more confirming data, as this was my first discussion page post.

I agree that it's important to be clear, so I've removed the second reference to Principle of indifference. I think that Bertrand may be an example of an economist who has utilized Bayesian probability, but who would have questioned either of the strong inferences necessary to produce a U(0,1] f distribution. I could be wrong, but would suggest adding an example of the 'many Bayesians' who question this step, and why they do so.

I would say that a condition of the Principle of indifference is that either: (1) no priors exist from which inferences can be made, or (2) the existing priors don't argue for differing state probabilities. Since we have no knowledge of previous civilizations similar to Homo sapiens culture, we might be in the first case with the Doomsday argument. By this line of reasoning the 'uninformative priors' analogy you make between this case and brick houses/tents is a false analogy, because we have personal experience of many brick houses and a good feel for how long they tend to stay up compared with tents. I would feel a lot more confident of a high N estimate if we had experience of several multi-trillion population civilizations (via SETI or pre-historical archeology, say).

The low N "pessimists" could plausibly argue (from the Fermi Paradox) that the very lack of priors is itself evidence which should bias our 2-sd N estimate downwards from the Principle of indifference value of 20n. However, I think we should try to separate these arguments across the relevant entries. What we can agree upon (I think) is that no relevant 'confirming' or 'disconfirming' evidentiary priors exist for comparable intelligent species population. We are then left with the the subjective 'prior' probability of logical extrapolation, which we should combine with the lower boundary evidence of N (that is, n) via Bayesian statistics to produce a 2-sd N estimate. Under what conditions could the forward-looking priors by powerful enough to produce an n multiple above (say) 50 with 95% confidence?

Caves' rebuttal, quoted at the end of the article, seems to rely on the same analogy as the one you make between houses and civilizations. In my opinion it is a misleading analogy, since in the first case (human lifespan) we have ample actuarial data, but in the second (civilization survival) we don't have a single complete data point. There is a lot more to Caves' arguments than this element, but I would like to express the opinion that quoting this example will tend to confuse the 'Doomsday argument' definition. Unfortunately, I can offer no better quote from his paper except his conclusion: "the Copernican principle is irrelevant to considerations of the survival of our species".

I acknowledge that 'priors' can relate to inferences not drawn from frequencies (in this case they will have to be logical projections) but I feel that the distinction between evidence-based 'priors' and extrapolation should not be confused by a comparison to a case where evidence is readily available. The precise point of the Gott thesis is that unprecedented cases do not present meaningful evidence, and hence that the principle of indifference is applicable. (For instance, the Berlin Wall was not really comparable to any previous structure when he visited it, and this enabled him to produce his estimate of its survival time.) Wragge 01:43, 8 Apr 2005 (UTC)

NPOV?

The Many worlds section starts: "The problem with the argument might lie..." Reading this I would feel that Wikipedia has the POV that the Doomsday argument is flawed. Is this a valid interpretation? Would we have a more NPOV if this section started: "The argument implicitly assumes..."

The Caves' rebuttal section does not have a NPOV; it says: "uses Bayesian arguments to show that...". This is a disputed claim, which would be more neutrally described as: "uses Bayesian theory to argue that...". Wragge 18:55, 2005 Apr 8 (UTC)

Yes, those changes would probably improve the NPOV of the article slightly. If you feel them to be worthwhile, I doubt that anybody will object. -- Derek Ross | Talk 02:31, Apr 21, 2005 (UTC)

Thanks for confirming what I had thought, I just wanted to clarify how NPOV should be applied. The great thing about Wikipedia is that objections can be made at any time, but I will take this as confirmation of my POV about NPOV, and probably make the changes soon. Wragge 09:57, 2005 Apr 21 (UTC)

Why the pictures?

I'm not really sure the pictures add anything to the understanding of the article Jackliddle 17:28, 6 Jun 2005 (UTC)

I added the pictures, and I agree they don't add anything to the explanation, but Wikipedia guidelines recommend adding pictures to all articles, even if only connected in an abstract way. If you have better ideas for images that would enhance comprehension, or be more appropriate, please add or suggest them.
Thanks for reading, Wragge 17:41, 2005 Jun 6 (UTC)
Fair enough, if that's what the Wikipedia guidelines say. Jackliddle 18:46, 6 Jun 2005 (UTC)

I suspect this entire article is an academic hoax akin to the postmodern hoax by professor what's-his-name of the university of wherever.

I'd never heard of this but the maths seems to check out and it's certainly fun trying to refute it. I think it's an interesting problem that teaches us a lot about the application of statistics. Jackliddle 17:16, 15 Jun 2005 (UTC)
No, this is not a pseudo-scientific hoax. What this is is an illustration of mathematical reasoning, some of it appropriate, some less so, and all of it applied outside of an overall sound context. FTR, the principal commonsense flaw in the argument is the assumption that human lifetimes would remain fixed long after an imminent technological singularity, since however "close" such a conjuncture is, the assumption that the current time contains no more than 5% of the species' members ever ensures that it is essentially in that current time. This in turn also interacts with the assumption of the limited carrying capacity of the earth to drastically affect the very basis of the whole illustration/thought experiment. Lycurgus (talk) 01:13, 29 January 2008 (UTC)

the infinite N objection

The article mentions that Andrei Linde objects to the DA based on the idea that N may in fact be infinite. The Wikipedia article has this to say of that objection:

if N really is infinite, any random sample of n will also be infinite, with the chance of finite n being vanishingly small... In fact, Leslie takes finite n to be "superbly strong probabilistic grounds for rejecting the theory" of infinite N.

Isn't this actually very poor reasoning? We pretty much know for a fact right now as of the time I write this, that n is finite. At some point in the past, there was a time before humanity, right? There was a first human born at some point. And if n is finite at any given point in time, then the only way it could ever become infinite is for there to be an interval of time between now and some future point where the human birth rate is infinite. But since the birth rate can never be infinite (for it to be infinite, there would have to be some sort of "factory" capable of cranking out an infinite number of humans in a finite period of time -- not possible in our universe, right?), it is a mathematical impossibility for n to be infinite, regardless of whether N is infinite.

n is a function of time. You can in fact say n(t). It is impossible for the function n(t) to ever take on an infinite value. It is possible for N to be infinite, but that is not the same thing at all!

Therefore, how can our observation of finite n have any influence on the likelihood of finite or infinite N? If N were finite, then n would be finite. If N were infinite, then n would still be finite because at the time any particular human is alive and thinking about n, it is impossible that an infinite number of humans lived before him. We can't prove that humanity won't live forever, but surely we can be confident that humanity hasn't always existed for all time, right?

So, it seems to me that these grounds for arguing that N cannot be infinite use bad reasoning at best. It is not possible to take "any random sample of N" because of the involvement of time.

---

I was going to comment exactly the same thing. The number of humans already born must (under "reasonable" assumptions) always remain finite, yet if time continues arbitrarily, an infinite number of humans may be born. I think the remark should be removed until someone explains the argument better (if there is indeed a correct argument here). -Benja

The claim that the chance of n being finite is vanishingly small seems entirely false. There are infinitely many natural numbers, but since each natural number is finite, if you were to somehow pick one out at random it would necessarily be finite. The remark seems to be based on the (bizarre, but to some people intuitive) idea that there are only finitely many finite natural numbers, which is not the case. I've removed it from the article. Factitious 12:49, August 29, 2005 (UTC)

new rebuttal - your thoughts appreciated

I want to make a rebuttal against the Doomsday Argument.


At first I thought the part "The doomsday argument is based on arbitrary hypotheses" would explain it, but I don't think that anymore. Basically there it is stated that choosing the interval (0.05,1] is arbitrary and any other interval (of the same size) could be chosen instead. But still, as long as you don't take (0,0.95], you will still get an upper limit to lifespan, and otherwise you will just get no statement at all. Now you can get several upper limits at the same time, but that is in itself no contradiction; the lowest upper limit simply yields the most information.

But obviously there has to be something wrong about the argument; after all, you could apply this argument to *any* situation where you have a finite number of elements. Also, the very first human on earth using the same reasoning would come to the conclusion that it is very likely that no more than 20 humans will ever be born.

Now this leads me to what I think is the major mistake here, obscured by a vague setup (like the statement "finding oneself at position n", etc.): the 'likelihood' of a situation depends not only on the probability of a single event but also on the sample size or number of realizations. So while it may be very unlikely for you to win the lottery, when millions of people play there probably will be some who win (and they will then rightfully wonder about how unlikely it was to get the prize).

So in the doomsday argument basically the following happens: we have a bucket with balls numbered 1 to N, and the total number N is not known. Now we draw one ball which reads a certain number n, and then we conclude that it is rather unlikely for N to be "much" larger than n (or, to stay with the DA, unlikely for n to be in the first 5%, n/N < 0.05). Now this reasoning itself is questionable when you are a "frequentist" (at least from what I have read here on Wikipedia), but that is not my point here. I rather question this whole approach to the problem. After all, if we for example draw every single ball from the bucket until it is empty, then we of course will have balls that read low numbers, and the fact itself that there are balls with low numbers doesn't imply anything (other than that N is equal or greater).

Now this seems to describe what happens here in a better way. Basically we have a group of N people and everyone gets a number out of {1, ..., N}. Now of course there will be people with numbers in the first 5%. There will even be someone with the number 1, and he will think "damn, how unlikely is that, me getting the number 1!" But of course, instead of just making one single trial, in our setup every single possible outcome is realized, even the 'unlikely' ones, so there is nothing special about it... and if human number 1 actually knows that everyone else also got a number, he cannot conclude anything else about the total number N apart from N ≥ 1.
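
For what it's worth, the 'everyone gets a number' setup above can be made concrete in a few lines (the value of N below is arbitrary). It only demonstrates the frequency property that, whatever N turns out to be, 95% of the positions n satisfy N ≤ 20n, which is the sense in which the doomsday interval is 'correct for 95% of humans'.

```python
# Give each of N people a number 1..N and count how many of them would be
# right if each concluded "N is at most 20 times my number".
N = 1_000_000
correct = sum(1 for n in range(1, N + 1) if N <= 20 * n)
print(correct / N)   # ~0.95: all but the first 5% of positions
```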

What do you think?

Dreiche2 18:11, 14 November 2005 (UTC)

The Doomsday argument is a probabilistic argument; it's not an argument that humans absolutely will go extinct within X period, it's an argument that there's a 95% chance humans will go extinct within X period. You are just addressing the other 5% chance by pointing to the low-numbered people. Warren Dew (talk) 03:18, 28 March 2009 (UTC)

Life on earth

Could the same be said of life on earth? There has been life for the last 4 billion years; therefore there's a 50/50 chance that there will continue to be life for another 4 billion. (I guess other factors come into play there.) Also, why doesn't the argument work if I look at generations past rather than all specimens in the past? That is, there are probably a trillion generations before us. But clearly, most of us will not have any descendants in the relatively near future. The doomsday argument should consider this, since it doesn't really matter if our descendants that survive are human or not. Neurodivergent 20:00, 14 December 2005 (UTC)

I post this here because I don't know where else. There's one important point no one bothered to even look at: there's no way one can determine the number of humans that have ever lived. There was never a moment in time when the child of two hominoid ancestors of humanity was suddenly a human. The number of human beings that have lived on this planet is uncountable. The borderline is fuzzy. You could say, OK, we HAVE to start with the beginning of life on earth. But there's also no way of counting how many living beings ever existed on earth. When did life start?
Anyways, the argument is complete crap nonetheless, because the assumption that there will only ever be a finite number of humans is so completely nonsensical. Even if we assumed that the universe is completely deterministic and one could model the total-ever-to-live number of humans as randomly chosen from the whole set of integers, there's no statistics one can use to handle that. Such a random value wouldn't, for example, have an expectation value.
So, as the random variable N isn't really a well-defined random variable (or even if it were, it wouldn't have any "nice" properties), n/N (a function of a random value is again a random value, and a function of two random values n and N is also a random value) doesn't have any "nice" properties either. For example, there's no well-defined distribution for this kind of random value. Or maybe there is (I have a fuzzy feeling that there might be), but even then it would be nothing like a uniform distribution. FlorianPaulSchmidt

Another refutation

The argument is roughly correct for any human who originally thinks of the argument. But let's consider all humans who will ever think about the argument independently. Roughly 5% of them would be wrong in their conclusion, correct? In particular, the first person to come up with the argument is, in all likelihood, wrong. The first published version of the argument in human history may also be concluded to be likely wrong.

One flaw with the refutation: not all humans in history are equally likely to come up with the argument first, as it requires probability theory to have been invented. So it could very well be an event that occurs towards the end of the existence of the species. However, this is an unknown and no conclusions can really be drawn.

Neurodivergent 17:27, 4 January 2006 (UTC)

The argument is damn conservative

A significant flaw of the argument is that it assumes the person asking the question could have been any one of the humans numbered 1..N. This is totally wrong. Humans with no knowledge of math could not have asked the question, nor could humans who die at a young age. The argument is really only informative about those who come up with it independently. But this is of interest. Let's assume Heinz von Foerster was the first to come up with the argument. The article does not mention a year, but let's assume it's 1950. Someone coming up with the argument today could conclude that there's a 50% chance that by the year 2060 humans with knowledge of probability theory will cease to exist. This assumes a uniform distribution of math knowledge across the years, which isn't right, so perhaps something like 2030 might be more accurate. This does not necessarily imply extinction of the species. A number of other scenarios are possible. Here's one: require teaching of the doomsday argument to all math students, so they are unable to come up with it on their own :) Neurodivergent 20:44, 4 January 2006 (UTC)
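The 2060 figure follows from a Gott-style step, under the comment's own assumptions (first formulation around 1950, observed in 2006): with probability 1/2 we are in the second half of the phenomenon's lifetime, in which case it has at most as much time left as has already elapsed. A sketch of that arithmetic:

 # 50% estimate using the assumed dates from the comment above
 first_formulated = 1950          # assumption stated in the comment
 now = 2006                       # year the comment was written
 elapsed = now - first_formulated
 # If we are in the second half (probability 1/2), remaining time <= elapsed time.
 print("50% chance of ending by about", now + elapsed)   # 2062, i.e. roughly 2060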

I'd like to request that you reserve this talk page for discussion of its associated article, not for discussion of the Doomsday argument itself. New refutations by contributors cannot be included in the article under Wikipedia's policy on original research, so the article's talk page isn't the ideal place to present them. Thanks. -- Schaefer 00:04, 5 January 2006 (UTC)
I'm aware of the NOR policy. That's why it's here and not in the article. Is it a big deal to post things like these here? Neurodivergent 14:25, 5 January 2006 (UTC)
Please see Wikipedia:Talk page guidelines. I'd just like to see talk pages used constructively toward writing an encyclopedia, which is what we're all here for. I didn't mean to single you out from the others on this page that have been using it more as a general discussion forum about the Doomsday argument. It's just that discussions here have grown long enough that I think it's worthwhile to put up some kind of reminder that this is a talk page for discussing ways to improve the article. -- Schaefer 20:38, 5 January 2006 (UTC)
Right. So the question is, would prolonging this discussion eluicidate, improve or correct any of the sections in the actual article? Now I think it might, since it seems to be playing around with how observer biases might affect the argument, but unless this can be sourced elsewhere, it is original research, unfortunately. --maru (talk) Contribs 22:07, 5 January 2006 (UTC)

How can refutations of the argument presented here not be relevant to the encyclopedia article? Are only the objections of academics relevant? My own objection to the argument is that it would always have been true, as far back as the 10th human ever to exist. There was still a 95% chance he or she was in the last 95% of people ever to be born, and obviously that wasn't the case. I don't see how there can be a 95% chance of anything that is completely unknown. Whether we are in the last 95% is something to which there is a definite answer, but about which we have no information whatsoever. Think of this in terms of Zeno of Elea's famous paradox. There is a 100% chance we are in the last 100% of people to exist. But halfway between 95% and 100% is 97.5%. There would be a 97.5% chance we are in the last 97.5%, which gives us more time than the 95% figure does. But to reach 100% you always have to cross the halfway point. You can never get to 100% this way, and as the certainty increases, so does the amount of time before extinction, all the way to infinity. There is a 99.9999999999% chance we are in the last 99.9999999999%. The maximum amount of time left approaches infinity as we approach 100% certainty. Meaningless.
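The "Zeno" point can be written as a one-line formula: if birth rank n is assumed to lie in the last fraction c of all births, then n >= (1 - c)·N, so the implied bound is N <= n / (1 - c), which grows without limit as c approaches 1. A sketch (the birth rank is the 60 billion figure used elsewhere on this page):

 # Bound on the total number of births N implied by "I am in the last fraction c":
 # n >= (1 - c) * N  gives  N <= n / (1 - c), which blows up as c -> 1.
 n = 60e9
 for c in (0.95, 0.975, 0.999999999999):
     print(c, n / (1 - c))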

Me... Me + 1.... Me + 2...

If I have a baby, the world population increases by one, right? That means I should be one step closer to probable obliteration. Except that my baby will run the exact same argument, and come up with the conclusion that he/she is 95% likely to be in the last 95% of humans, which is now 60 billion + (however many births intervened between mine and the baby's) + 1 (the baby). Hence my child will conclude that the probable lifespan of the human race is higher than what I calculated, using the same analysis... and if he/she reports that result to me, I'll be forced into a contradiction (the child's birth implies that we are one step closer to doom, yet the child's birth permits the child to use the same mathematical analysis as I did to demonstrate to me that I am one step farther from doom). Amusingly, no one on earth should be able to agree about the probable lifespan of the human race, because it is different for everyone (because a different number of humans was born before each of us, though the definition of "before" I guess requires a privileged reference frame, but whatever)...

Another way of saying this: at every birth, the probability that WE'RE ALL GONNA DIE IN X TIME AUGH! should be recalculated, because it is 95% probable that this baby is in the last 95% of the human race. As the population grows, it will never be the case that our doomsday estimate shrinks; it must always grow.

An even better way to say this is to look at each individual on the street and calculate a time stamp on the basis of how many people have come before them... lessee... you're number 60 billion and 5, so it's 95% probable that... lessee... you're number 60 billion and 80, so it's 95% probable that... gee, different numbers for each. Oh dear, now what do I do? I'm stuck in a contradiction! (Probable lifespan = 9210 years, 95% certain; probable lifespan = 9214 years, 95% certain; probable lifespan = 9213 years, 95% certain... all those 95% certainties in one place! eek!)
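The disagreement described here reduces to one line: person number n derives the 95% bound N <= 20·n, so each later observer gets a slightly larger bound and the bound never shrinks. A sketch using the ranks mentioned above:

 # Each observer of birth rank n derives the 95% bound N <= 20 * n on total births,
 # so consecutive observers disagree slightly and every birth pushes the bound up.
 base = 60_000_000_000
 for offset in (0, 5, 80):            # "me", "number 60 billion and 5", "... and 80"
     n = base + offset
     print(n, "-> N <=", 20 * n)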

A way to make this triply absurd: given that the number of children born each day is increasing, the rate at which the probable lifespan of the human race changes is increasing too. Shucks. Our probable lifespan is accelerating away into the future... oh noes...

And to elaborate on an argument made above, which asks "why is N = #humans": we might also say "this is 95% likely to be a member of the last 95% of the tacos ever made at a Taco Bell," "this is 95% likely to be a member of the last 95% of humans to die in a car crash," "this is 95% likely to be a member of the last 95% of rain storms ever on earth" (how many rain storms have there been since the dawn of time? a lot, I would imagine), "this is 95% likely to be a member of the last 95% of times that the sun will rise" (there, that has an easy rate, and a big freaking number of days since the dawn of the earth), "this is 95% likely to be a member of the last 95% of wikipedia articles I ever randomly click on," etc.

It's a really stupid argument, and I'm amazed that I just wasted time refuting it. How does anyone take this seriously? Doesn't it occur to you to actually apply your method to probable counterexamples? Whenever an analysis has surprising results, you don't *gasp* and embrace them; you make sure you aren't being completely absurd, apply the method to some random problems, etc.

That's not how it works. Every human ever to live could make the argument, and clearly 5% of all humans would be wrong, whereas 95% would be right. The fact that 5% would be wrong does not disprove the probabilistic argument. In fact, no refutation of the doomsday argument is totally convincing. The best refutations I've come across are those that deal with reference classes, with the fact that we're the first humans to learn of the doomsday argument, and with the fact that learning of the doomsday argument could affect the outcome. Neurodivergent 14:40, 27 April 2006 (UTC)
5% would be wrong, and 95% would be right, and no one has any idea whether I would be in the 5% or the 95%. The odds are not relevant if the future is fixed; there is a 100% chance that whatever will happen will happen. That you don't know what that is doesn't make it any less than 100% certain. If I'm in the 5% I'm in the 5%, and if I'm in the 95% I'm in the 95%, and there is no other criterion whatsoever for determining which. 95% is an arbitrary number. As you raise the number to make the probability higher, the possible lifespan determined by that probability grows longer, and 100% gives you eternity. The only thing changed by the doomsday argument is inside people's heads.
Well, if the Copernican principle is applicable to you, then there's a 5% chance you'd be in the 5% group. That's all the argument is. Of course you're correct that 95% is arbitrary. You could also ask whether you're in the first 50% or the second 50%. The 95% limit is selected as one at which a hypothesis can be probabilistically rejected. Neurodivergent 23:00, 8 May 2006 (UTC)
Again, there is no chance involved. I am in the 5%, or I am in the 95%, one or the other. Whichever it is is a cold hard fact and there's 0% chance I'm in the other group. We don't know which it is either way, nor can our idea alter the lifespan of humanity accordingly. "The 95% limit is selected as one where a hypothesis can be probabilistically rejected." - I'm not ashamed to say I have no idea what you mean. If 95% is good, then 96% is better. 99.9999999999999%, even better. To approach 100%, the time limit approaches infinity.--Badmuthahubbard 08:23, 14 May 2006 (UTC)
Again, there is no chance involved. I am in the 5%, or I am in the 95%, one or the other. Whichever it is is a cold hard fact and there's 0% chance I'm in the other group.
Careful there—you're declaring a victory for what's now widely regarded as the losing side of a mathematical debate that's been going on since the 1700s. Not that there's anything wrong with that, as long as you're aware and have more justification than mere assertion for such an extraordinary claim. If not, see Bayesian probability or, if you have the time, the late E. T. Jaynes' unfinished masterpiece, Probability Theory: The Logic of Science. -- Schaefer (Talk) 14:02, 20 July 2006 (UTC)
Declaring a victory for?... well defending, maybe. I would love to know more about the math behind this, but no, I don't have time right now, although I don't see any flaws in what I said. That the lifespan of humanity is finite is an assumption, as I think the article mentions. And even assuming it, like I said, raising that 95% certainty closer and closer to 100% certainty extends the relevant lifespan closer and closer to infinity. This doesn't even contradict the argument itself, it is merely a logical extension of it.--Badmuthahubbard 21:26, 25 July 2006 (UTC)

How did dinosaurs last so long?

Doesn't the same calculation apply to them? They were around for > 150 million years. I haven't yet had a chance to read this article (maybe tomorrow) but that was the first question that popped into my mind. Phr (talk) 13:28, 20 July 2006 (UTC)

Absolutely. Let's say all dinosaurs had thought of the doomsday argument. Clearly 95% of them would've been correct in their assessment. Neurodivergent 14:12, 29 September 2006 (UTC)

I see three serious problems with the Doomsday argument:

1) Because probability is not inherent in an event, but instead is a reflection of our knowledge about the event, a probability value derived from a principle of indifference and vague priors has little value because it is likely to change radically as soon as factual knowledge is applied – for example extinction data for other species.

2) The assumption that we are asking these questions at no special moment in human history is almost certainly invalid. When we were living in caves, we lacked the knowledge and mathematical sophistication to develop these arguments. At our present point in time, give or take a few tens of thousands of years, our curiosity makes this question almost inevitable, and so it is extremely unlikely that we would go for many more millennia without asking it. Because this is therefore a relatively fixed point from our beginnings, its randomness, which is necessary for the argument, cannot be assumed and is probably false.

3) Even if there were nothing special about our desire to develop the argument at this point, there is still a problem. The underlying assumption of all versions is that if we arbitrarily select a point in the history of a species, it is as likely to be in the second half as the first half of that species' existence. This is true, however, only if the species could have extended backward in time without constraint – i.e., if there is symmetry between the history and future of a species. However, life on earth only emerged three and one half billion years ago, vertebrates later, and homo sapiens, via evolution, a few hundred thousand years ago. Assuming that, in theory, a species might enjoy either short or long longevity, no very long-lived species could possibly be at its midpoint 300,000 years into its existence. The assertion that we might be at the midpoint requires the implicit assumption that we might have originated an arbitrarily long time ago, but that is evolutionarily impossible. This constraint on the length of our past forces the calculations to overstate the probability that we are at a late stage in our existence. I believe Gott tried to test his Copernican principle by applying it to the longevity of shows on Broadway, and asserted it to be accurate. That worked, however, only because the longevity of almost every show is much shorter than the longevity of the genre, shows. If the genre only originated 100 years ago, let us say, and shows had the capacity to last for thousands of years, with their longevity evenly distributed over that range, then the vast majority of shows today would be in their infancy. In fact, one could extend that argument to almost anything, including the human race, extend the potential ranges of longevities to arbitrarily extreme limits, and thereby prove that the human race is likely to survive for many trillions of years. Fmoolten 00:53, 9 September 2006 (UTC)
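The Broadway point in 3) can be checked with a toy simulation (the 100-year genre age comes from the comment; the 5,000-year cap on potential show lifetimes is an arbitrary illustrative assumption): if potential lifetimes can be far longer than the age of the genre, almost every show still running "today" is in the first half of its eventual run.

 import random
 trials, running, in_first_half = 200000, 0, 0
 for _ in range(trials):
     age = random.uniform(0, 100)        # how long the show has run so far (genre is 100 years old)
     lifetime = random.uniform(0, 5000)  # its eventual total run, capped at an assumed 5000 years
     if lifetime > age:                  # the show is still running "today"
         running += 1
         if age < lifetime / 2:
             in_first_half += 1
 print(in_first_half / running)          # close to 1: most currently running shows are "young"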

Could use votes to save this article, thanks MapleTree 22:21, 28 September 2006 (UTC)

"we are within the earliest 5%, a priori"

What is that supposed to mean? Is it a spelling mistake? Is it actually "priority"? AstroHurricane001(Talk+Contribs+Ubx) 23:06, 9 January 2007 (UTC)

It means a priori. --Gwern (contribs) 05:22 10 January 2007 (GMT)

WMD era humans?

Can species not go extinct without WMDs?—Badmuthahubbard 03:40, 7 February 2007 (UTC)

Section "Simplification: two possible total number of humans"

This section proposes two scenarios: early doom (N = N1 = 60 billion) and late doom (N = N2 = 6,000 billion). According to the DA, people born with rank n <= N1 should believe in early doom (N1). However:

If N1 is true:

- If people follow the DA (predict N1), then 60 billion people will be correct.

- If people do not follow the DA (predict N2), then 60 billion people will be wrong.

If N2 is true (I'm assuming that in this case, if you are born at n > N1, you can discard the N1 scenario):

- If people follow the DA (predict N1), then 60 billion people will be wrong.

- If people do not follow the DA (predict N2), then 60 billion people will be correct.

The point is that if n <= N1, it does not really matter whether you follow the DA or not (i.e., predict N1 or N2); you still have a 50/50 chance of being correct.

Given the time that very smart people have been discussing the DA, most likely I'm wrong :), which is why I'm writing this on the discussion page only.

I also wanted to ask: is there any way that the DA can be empirically refuted? I mean, can the DA be falsified? Otherwise, I don't think the DA should be taken seriously.

  • The DA is not a general theory. It's a specific prediction. Should the prediction be realized, and humanity does go extinct within 9,000 or so years, that would be a confirming instance for the theory BEHIND the DA (namely, the specific assumptions required for the DA). Should the prediction not be realized, it would be evidence against the theory at the 95% confidence level. I don't see any good way to perform an experiment to test it, though; we just have to wait and see. --128.12.103.70 (talk) 17:34, 8 April 2008 (UTC)
  • Oh, as for your argument, one problem is the point where you write "you still have a 50/50 chance of being correct." Without knowledge of the relative probabilities of N1 and N2, we can't say that there is a 50% chance of each. Suppose that N1 is an unlikely scenario, with only a 10% probability; then people who follow the DA are wrong 90% of the time. Conversely, if N2 is an unlikely scenario, then people who reject the DA are wrong 90% of the time. See the principle of indifference; your argument depends on an application of this (somewhat dubious) principle.--128.12.103.70 (talk) 17:40, 8 April 2008 (UTC)
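For reference, the Bayesian update behind the section under discussion can be written with the prior on N1 left as a parameter, so both the principle-of-indifference case (50/50) and the 10% case mentioned above can be read off. This is a sketch assuming the self-sampling likelihood P(n | N) = 1/N for a birth rank n <= N1:

 # Posterior probability of early doom (N1), given a birth rank n <= N1,
 # a prior p1 on N1, and the likelihood P(n | N) = 1/N.
 def posterior_n1(p1, N1=60e9, N2=6000e9):
     like1, like2 = 1.0 / N1, 1.0 / N2
     return p1 * like1 / (p1 * like1 + (1.0 - p1) * like2)
 print(posterior_n1(0.5))   # ~0.990: the indifference prior shifts strongly toward early doom
 print(posterior_n1(0.1))   # ~0.917: even a 10% prior on N1 ends up favouring early doom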

unwarranted assumption

...: that humanity's extinction is inevitable.

How come that's an unwarranted assumption? There's surely some average lifespan of a species in Earth's history; wouldn't humans be a rather odd exception if they never went extinct? It seems to me as commonsensical as suggesting that it's inevitable I die one day. So why is this assumption odd? --83.131.148.241 15:21, 10 August 2007 (UTC)


Clearly there's an upper bound on the longevity of the human race, if only because there is an upper bound on how long the universe itself will host life before entropic heat-death. 70.90.171.153 (talk) 21:06, 5 December 2007 (UTC)

The heat death doesn't necessarily imply life must end; see [2]. Both Dyson and Tipler have listed scenarios in which an open expanding or closed contracting universe (respectively) allow life to live forever in some sense or another. --Gwern (contribs) 04:42 9 December 2007 (GMT)

Human population stabilization?

Just some ideas I had while reading the article - I don't know if they have been addressed or could be incorporated...

Why assume that the human population will stabilize at 10 billion? For all we know, it could go on increasing until every tiny part of the earth is covered with humans, or we might emigrate to other planets, or maybe there will be some major catastrophe leaving only relatively few humans to build up a new population. An awful lot could happen over the next few thousand years, and it would be foolish to try to predict it! However, without the assumption of stabilization, the conclusions of the DA seem pretty meaningless - what's the point of knowing how many people will be born if you don't know when?

On the other hand, if we assume the population has stabilized, as necessary for the estimate of 9120 years, it sounds like we've really got our act together - no major wars, famines, natural disasters etc., or at least ways to alleviate them. So why should we suddenly become extinct? —Preceding unsigned comment added by 91.47.85.148 (talk) 15:45, 1 May 2008 (UTC)
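For what it's worth, the arithmetic behind the 9120-year estimate mentioned above can be spelled out; the 80-year life expectancy used here is an assumed input that sets the replacement birth rate of a stabilized population:

 # Arithmetic behind the 9120-year figure
 past_births = 60e9                      # cumulative human births to date, as used on this page
 max_total = 20 * past_births            # 95% bound: total births N <= 20 * n
 future_births = max_total - past_births # 1.14e12 further births allowed by the bound
 stable_population = 10e9                # the assumed stabilized population
 life_expectancy = 80                    # assumed; gives 10e9 / 80 = 1.25e8 births per year
 births_per_year = stable_population / life_expectancy
 print(future_births / births_per_year)  # 9120.0 years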