Talk:St. Petersburg paradox/Archive 2
This is an archive of past discussions about St. Petersburg paradox. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Archive 1 | Archive 2 | Archive 3
The essence of the paradox
The essence of the paradox is that this betting system resembles the Martingale system of betting, in which you gamble until you win back your losses. There are two conflicting approaches to calculating the expected value from using the Martingale system of betting a number of times:
> On the Martingale page, it is stated that the expected value of each bet is zero, and since these can simply be added, the expected value of a series of Martingale bets is zero. (if the casino doesn't take any cuts)
> On this (the St Petersburg paradox) page, the probability of losing with the Martingale system is ignored, since it approaches zero as the number of bets approaches infinity. This page, therefore, presents the expected value of winnings as infinite.
Now I don't have any prior experience in this area, but this, to me, presents itself as the most obvious contradiction! —Preceding unsigned comment added by 124.191.65.43 (talk • contribs)
- The St Petersburg paradox does not concern a series of bets, but one bet on a single series. —SlamDiego←T 07:20, 18 July 2007 (UTC)
This is the essence of the paradox:
- Here is a game in which the expected payoff is not well-defined.
- Question: What is the expected payoff?
- Answer: it is not well-defined.
The 'paradox' arises from a failure to handle a divergent series with due rigour. AnotherJohn 21:16, 11 November 2007 (UTC)
- Yes, if you ask a contemporary mathematician this is the way she would put it. If you ask an economist or philosopher they might have a completely different approach to it. I had a quick look at the article now, and this contemporary view of the mathematicians is lacking in the article. We should really add this view too. iNic 19:37, 13 November 2007 (UTC)
- We should only add this view if we can find a reliable source for this view that we can cite. Note also that if the fee is $5 instead of $25 (or, if, for a $25 start fee, instead of doubling the pot, it is tripled each time), quite a few "fairly reasonable" people would gladly play the game. Yet the mathematical analysis is not essentially different. --Lambiam 11:00, 14 November 2007 (UTC)
- The real conundrum is why the St. Petersburg "paradox" is taken seriously at all. And it is; a Google Scholar recent article search on "St. Petersburg paradox" brings up 6970 hits. Yet the St. Petersburg problem is mathematically identical to the double-your-bets Martingale betting system, with the same solution: the hidden and unrealistic assumption that someone actually has infinite resources. I say actually infinite because in both problems large is not a good approximation to infinite due to logarithmic growth. Yet we don't hear economists wringing their hands asking why people bother to work since "in theory" they could just go to casinos and play the Martingale. But we need a source to say this in the article. --agr 12:16, 14 November 2007 (UTC)
Not a paradox
The following sentence in the introductory paragraph makes it sound like decision theory is wrong with respect to this problem:
- "The St. Petersburg paradox is a classical situation where a naïve decision theory (which takes only the expected value into account) would recommend a course of action that no (real) rational person would be willing to take."
In fact, the calculation of the expected value of the game using decision theory is correct: it is truly infinite. For any limit it can be shown that one only needs to play the game long enough to exceed it. The growth is logarithmic as noted later. So this in essence is not a paradox at all: the fair price of the game is infinite provided that we keep the problem in the 'ideal' domain. Tough luck to the mortal that cannot comprehend ideals and so calls it a paradox.
What in my opinion needs to be emphasized is the difference between the ideal and the real game. There is no paradox in the ideal game. And there is none either in the real one, because the real game suddenly has all sorts of limits, so the expected value is not infinite any longer. Only when we mix up the two worlds due to sloppy thinking does this become a paradox (in the sloppy thinker's mind). Petersburg 20:51, 12 August 2007 (UTC)
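A worked version of the divergence claim above, assuming the common formulation in which the first head on toss k pays 2^(k-1) dollars (other payoff conventions only shift the constants):

E[X] = \sum_{k=1}^{\infty} 2^{-k} \cdot 2^{k-1} = \sum_{k=1}^{\infty} \tfrac{1}{2} = \infty, \qquad \sum_{k=1}^{n} 2^{-k} \cdot 2^{k-1} = \tfrac{n}{2}.

So truncating the game after n possible tosses leaves an expected value of only n/2 dollars, which grows logarithmically in the largest possible payout 2^(n-1); this is the slow growth referred to in several comments on this page.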
- The usual way decision theory (and maths in general) is applied to real-world problems is: (1) you make an abstract ("ideal") model of the problem, leaving out hopefully irrelevant detail; (2) you solve the problem in the mathematical model; (3) you translate the results back to the real world. In this case, the naïve application of decision theory leads to omitting the (usually irrelevant) detail of actual limits on the playing time from the model, and thereby to a result that does not apply to the real-world situation. In general, a paradox arises when two different lines of reasoning, each by themselves appearing quite plausible, lead to contradictory conclusions. In this sense, the St. Petersburg paradox is a real paradox. And in any case, that is the name by which it is generally known. --Lambiam 10:36, 14 November 2007 (UTC)
- One more comment: the word 'naive' is important to understand correctly here, since it does not mean that all decision theorists are naive or something along that line. (If it can be misunderstood this way, we should change the sentence!!!) No, it means just that a decision theorist "who is naive" would come to a strange conclusion. No non-naive decision theorist would come up with the idea that the expected value is the sole decision criterion. (Why should it be? - Besides that some stupid high school teachers tell their students so, but that's another problem...) But when people (mathematicians, in this case) first seriously studied such lotteries, they were truly naive, in that they studied these problems for the first time and just tried out something simple. And the St. Petersburg paradox told them that there is something wrong with this simple approach, so they thought again. This led to the development of expected utility.
- Thinking about the sentence under question again, I would change "naive decision theory" to "naive decision criterion", that saves readers from misunderstandings, I guess... Rieger (talk) 11:43, 6 February 2008 (UTC)
- Isn't it rather "the naïve application of decision theory" that leads to the paradox? --Lambiam 16:43, 6 February 2008 (UTC)
Proposed External Link
I think The Unravelling of the St. Petersburg Paradox might be interesting to visitors to this article. The page looks like a sell, but it's actually a full preview and a free download. It can also be found in larger font half-way down the page here - the section with the yellowy background. If anybody thinks it's worth a click, could they place the link in the External Links section. It contains a simple formula for calculating the fair value of the game based upon limited time to play that isn't in the Wikipedia article. --Vibritannia 16:58, 27 November 2007 (UTC)
Barzilai refs
Several references to the work of Barzilai were just added. They don't seem to deal with the St. Petersburg paradox directly and are cited as technical reports which suggest they have not appeared in peer-reviewed publications. I'd like to get other opinions before deleting them.--agr (talk) 17:39, 16 January 2008 (UTC)
- I removed it. Barzilai is still present in the see also section. iNic (talk) 02:01, 17 January 2008 (UTC)
- See-also mention also removed as being irrelevant. --Lambiam 08:23, 17 January 2008 (UTC)
- I looked at these two papers; there was nothing in them that was both original and correct. The author seems not to have engaged any utility theorists in discourse (here indeed peer review would have helped) nor to have investigated much of the primary literature before barrelling ahead. And agr is correct that the connection to the St. Petersburg Paradox is merely implicit. —SlamDiego←T 02:39, 17 January 2008 (UTC)
Martin article
Although I think that the article by Martin is interesting as a reference that points out how philosophers discuss this issue, should we not at least hint that there are some points in the article by Martin that would be quite disturbing to any economist or decision theorist? (To put it boldly: they are just wrong.) Otherwise I feel a little bad if we send off readers so uncritically to this source...
Suggestion: let us add at least the nice word "controversial"... Rieger (talk) 11:52, 6 February 2008 (UTC)
- Arguably, any theory based on explaining the St. Petersburg paradox is just wrong because the paradox requires an assumption of infinite resources, an assumption no reasonable person would make. Unfortunately, we can't editorialize, we need sources.--agr (talk) 02:01, 7 February 2008 (UTC)
- Perhaps controversy can be found, though my cursory skimming of a Google search did not find anything. —SlamDiego←T 02:24, 7 February 2008 (UTC)
- Yeah, that article is… unfortunate. —SlamDiego←T 02:27, 7 February 2008 (UTC)
What is unfortunate isn't the article in itself but that this is the only reference to a current philosopher in this section. If we add some more opinions here it is obvious even to philosophical virgins that Martin's article is just his opinion. I will look for something to add here. BTW, what about an article entirely devoted to the history of this paradox? I think that would be quite interesting. I can start one and see what you think. iNic (talk) —Preceding comment was added at 19:10, 8 February 2008 (UTC)
History of the paradox
I'd rather see a section "History" here than a separate article as suggested above. If it grows too large, we can always do a spinout then. --Lambiam 00:07, 9 February 2008 (UTC)
- Just a suggestion... The economist William Petty already wrote about this in "Political Arithmetic" from 1690-ish. He had stated, along these lines: "The more lottery tickets you buy, the more likely you are to lose, because if you buy all of them, then you certainly have lost." —Preceding unsigned comment added by 209.213.84.10 (talk) 15:45, 10 July 2009 (UTC)
- Well, first, he was writing about a different sort of lottery, one in which money for winning a lottery came exactly and only from purchases of tickets for that same lottery (and, indeed, the sellers extracted some accounting profit from the purchases); Petty's argument has no bearing on the sort of lottery proposed here. Second, Petty was writing well after Cramer and Bernoulli. —SlamDiego←T 17:38, 10 July 2009 (UTC)
Simpler but…
I reverted an edit which replaced
- For example the function is bounded above by 1, yet strictly increasing.
with
- For example the utility function is bounded above by 0, yet strictly increasing.
That second utility function is doubtless less complex, but it involves division-by-zero at a specific finite value of its argument, and the typical reader would be thrown by a measure whose values are everywhere negative. (I presume that this latter point is why the original editor didn't use such a function.) Even the obvious simpler alternative involves division-by-zero at a finite value of the argument.
Certainly, the article on utility more generally ought to make it plain that functions that are everywhere negative are acceptable, but the present article shouldn't derail readers with such tangents. —SlamDiego←T 03:50, 3 May 2008 (UTC)
- Assuming that the function only needs to be defined on the non-negative reals, a non-negative strictly increasing bounded function defined by a very simple formula is --Lambiam 12:42, 3 May 2008 (UTC)
- Which has the added virtue that it isn't explicitly associated with any negative signs or subtractions, which associations can disturb less mathematically sophisticated readers. (“But wait, isn't more better?”) —SlamDiego←T 13:33, 3 May 2008 (UTC)
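The specific utility-function formulas in the exchange above did not survive archiving. Purely as hypothetical stand-ins consistent with the descriptions (not necessarily the expressions originally written): u(w) = 1 - exp(-w) is bounded above by 1 and strictly increasing, with no division by zero; u(w) = -1/w is bounded above by 0 and strictly increasing for w > 0, but is undefined at w = 0 and everywhere negative; and u(w) = w/(w+1) is non-negative, bounded, strictly increasing on the non-negative reals, and free of negative signs.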
Dubious assertion in light of recent events
Article says "... sellers would not produce a lottery whose expected loss to them were unacceptable"; however, here [1] is a real-life St. P's paradox. It ought to be included in the article I think, but I will not trespass where my knowledge is limited. Cutler (talk) 16:33, 2 June 2008 (UTC)
- You seem to misunderstand the term “expected loss”. It is not the loss in a worst-case scenario; it is the probability-weighted sum of losses across various scenarii, and the relevant probability is that imputed by the decision-maker. Rather obviously, Hurman thinks that the probability of him failing is made tiny exactly by virtue of the cost of such failure. —SlamDiego←T 00:20, 3 June 2008 (UTC)
- I take the point. Cutler (talk) 15:40, 13 July 2008 (UTC)
That d_mn'd period
Lambiam, that's a misreading of the MoS. Look at the entire section “Typesetting of mathematical formulas”; you'll see numerous examples of formulæ that end sentences but aren't followed by periods. In the subsection that you cite, the formulæ that are followed by periods are blockquoted, rather than being merely displayed blockstyle; they would otherwise appear inline. —SlamDiego←T 08:50, 12 June 2008 (UTC)
- Maths should be treated as part of the sentences in which they lie, and hence should have commas and full-stops where appropriate, as said at Wikipedia:MSM#Punctuation. The examples at Wikipedia:MSM#Typesetting of mathematical formulas are only examples of how particular formulas will appear, not of how they should be punctuated. To reduce confusion you could replace your dot (for multiply) by proper multiply-signs, or rearrange so that multipliers appear before the summations. Melcombe (talk) 09:15, 12 June 2008 (UTC)
- Again, actual examples in the MoS perfectly belie the claim that the MoS requires punctuation for block-displayed formulæ. These examples could have been punctuated; there is no basis for claiming these examples are ‘bad examples’ that the reader is somehow supposed to guess or to infer to be such. At present, the MoS only requires punctuation for inline-displayed formulæ (as indeed it should). And using a different operator for multiplication elsewhere in the article would not prevent the reader from wondering if the period was here an operator in a truncated expression. —SlamDiego←T 10:01, 12 June 2008 (UTC)
- Some editors of that section did not follow the punctuation rule. That does not mean the punctuation rule is invalid, it just means that they did not apply a rule that could have been applied. However, the MoS rules are intended for Wikipedia articles (in the "main namespace"), and the Wikipedia:MSM page is in the "Wikipedia namespace"; it is not an article. Just look at any professional mathematics journal, and you'll see that the articles follow the punctuation rule. Here is just one example of a well-set maths article; examples like this could be expanded indefinitely. --Lambiam 16:29, 12 June 2008 (UTC)
- I propose that you and I move this disagreement to the talk page of the MoS itself. Regardless of the fact that I would see support of the period as a change in policy, I would agree that the article should follow any MoS convention that results. Is this an acceptable path to resolution for you? —SlamDiego←T 02:52, 13 June 2008 (UTC)
- OK, if you feel the guideline is unclear or wrong, so that the rule "a sentence which ends with a formula must have a period at the end of the formula" is supposed to be, or should be, subject to a (thus far unwritten) clause "except if that formula is a display", then please take it up on the talk page of the guideline. --Lambiam 07:54, 13 June 2008 (UTC)
- I have attempted a fair statement on the MoS talk page at “Punctuation of block-displayed formulae”. —SlamDiego←T 09:09, 13 June 2008 (UTC)
Request for comments: punctuation after displayed formula
The question is whether a sentence ending on a displayed formula should have a period after the formula.
After the discussion above (#That d_mn'd period) and on the MoS talk page at “Punctuation of block-displayed formulae” one might have hoped the issue would have been settled, but apparently not;[2][3] hence this RfC.
- Why in the world was THIS particular article's talk page chosen for this question? We've had this discussion at length before, on appropriate pages. Michael Hardy (talk) 21:49, 17 August 2008 (UTC)
- Because of edits such as this and this. If you know of a better conflict resolution mechanism (given the preceding attempts for this case), I'd like to hear about it. --Lambiam 12:06, 18 August 2008 (UTC)
Yes, a sentence ending on a displayed formula should have a period after the formula:
- Yes. --Lambiam 18:01, 8 August 2008 (UTC)
- I agree. —David Eppstein (talk) 18:20, 8 August 2008 (UTC)
- Yes. This is certainly a WP:WPM standard, and is a stylistic standard in all of the modern mathematics journals with which I am familiar. It is also quite unambiguously supported by WP:MOSMATH#Punctuation. siℓℓy rabbit (talk) 18:26, 8 August 2008 (UTC)
- As silly rabbit says, this is a near-universal convention in professionally published mathematics. According to section 14.22 of the Chicago manual, "Displayed equations must carry ending punctuation if they end a sentence." — Carl (CBM · talk) 18:38, 8 August 2008 (UTC)
- Agreed. Gandalf61 (talk) 20:16, 8 August 2008 (UTC)
- Yes, per WP:MOSMATH. Ozob (talk) 22:51, 8 August 2008 (UTC)
- Yes. Leaving out periods and commas can make complicated mixtures of prose and formulae harder to read. But I would suggest that English punctuation in a formula always be set off by a large horizontal space from everything else, like this:
- Otherwise, it will only make the formula even less legible. —Saric (Talk) 20:00, 9 August 2008 (UTC)
- Yes. I'm not a huge fan of ending a sentence with a mathematical formula in the first place and would avoid doing it if I could, but if it can't be helped then it should have the period. Reyk YO! 02:17, 11 August 2008 (UTC)
- Yes, whenever math expressions are put into the context of a (written) natural language. However, in listings or tables displaying several formulas, punctuation should be avoided. iNic (talk) 03:41, 11 August 2008 (UTC)
- Yes, maths expressions should be punctuated following the ordinary rules for sentences even when equations are "displayed", and this should include both full-stops and commas. Melcombe (talk) 08:40, 11 August 2008 (UTC)
- Yes, per Melcombe above. Formulas are part of the sentence rather than some strange objects that need to be separated from the flow. Oleg Alexandrov (talk) 22:11, 11 August 2008 (UTC)
- Yes, I agree. Is this RfC resolved? Please change tags if so. Cool Hand Luke 22:04, 21 August 2008 (UTC)
- In my opinion, it should be considered resolved in favor of adding a period after every formula that comes at the very end of a sentence. (I don't like the resolution, but I believe that, under the customs and protocols of Wikipedia, it obtains.)—SlamDiego←T 22:56, 21 August 2008 (UTC)
- Yes, every English sentence should be terminated by a period, regardless of whether it ends with a displayed mathematical formula. Since the only editor arguing for the omission of periods now appears to have conceded that the issue has been resolved in favor of including them, I have replaced the Rfc templates with a resolved template. If anyone still considers the issue unresolved, please feel free to override.
No, a sentence ending on a displayed formula should not have a period after the formula:
Sometimes but not always. The range of possible answers originally provided (“Yes” or “No”) begged an important question.
- Wikipedia should adopt the conventions of the discipline within which the article principally falls. The fact that an article uses mathematics doesn't mean that it should use the conventions of journals of mathematics. (The article referenced here is an article about economic theory, not pure mathematics.) —SlamDiego←T 04:35, 9 August 2008 (UTC)
Comment/further discussion:
I have added a more appropriate RFC tag to this section. In the aforementioned discussion on the MoS talk page, I attempted a fair summary of the disagreement (and Lambiam raised no objection to that summary). I am therefore sorry to see that Lambiam's remarks above insinuate that the MoS discussion produced consensus. Not only were there very few participants in that discussion, but it didn't result in any changes to the MoS that would have resolved the ambiguity of the MoS (which indeed makes simple assertions in WP:MOSMATH#Punctuation, but itself follows a different practice when formulæ are given block displays). —SlamDiego←T 04:41, 9 August 2008 (UTC)
- Above you imply that journals of economic theory have the opposite convention. However, all the references in our article that have appeared in print, including those in journals of economic theory, follow the standard punctuation rule, with the possible exception of Menger's "Das Unsicherheitsmoment in der Wertlehre: Betrachtungen im Anschluß an das sogenannte Petersburger Spiel" (1934), which on page 466 contains one instance of a sentence-ending formula – itself ending with an ellipsis (three dots) – that is not followed by a period. --Lambiam 09:49, 9 August 2008 (UTC)
- I am implying that different disciplines have different conventions. If a survey of Anglophonic (since this is the English-language Wikipedia) journals of economics supports the use of punctuation for block-display formulæ, then consensus should be and probably will be that punctuation should be used just as if they in-lined. If Anglophonic journals of economics conventionally do not use block-display punctuation, then policy should be that Wikipedia articles on economics should not. If a survey indicates a toss-up, then more than one policy would be reasonable, but should be explicitly discussed. —SlamDiego←T 12:19, 9 August 2008 (UTC)
- As to the MoS discussion producing or not producing consensus – my interpretation is that four to one constituted a consensus. I did not see a need to change the MoS; this has been the first and only instance I'm aware of of an editor not accepting the rule even after it was explained to them. --Lambiam 09:59, 9 August 2008 (UTC)
- No, the reason that, for example, AfDs are routinely relisted when numbers are as small as that is that consensus isn't decided by mere vote, and especially not by tiny votes. The MoS itself does not conform to what you claim to be the proper interpretation of the rules. That's either because the rule is over-simplified in WP:MOSMATH#Punctuation, or because the non-conforming parts of the MoS are in error; something about the MoS should be changed. This is not a case of my refusing to accept a rule that has been explained to me, but rather of my not accepting an interpretation that has been explained to me. If you're not aware of interpretations not being accepted, consider that you haven't always been willing to accept the interpretations of others — in some cases for good reason. —SlamDiego←T 12:19, 9 August 2008 (UTC)
- Could you list the places in the MOS that don't follow the convention? I looked through MOSMATH but each sentence there seems right. Maybe they have already been fixed. — Carl (CBM · talk) 12:44, 9 August 2008 (UTC)
- If you'll look at edits made to the MoS after this RfC, then you'll find that one or more “Yes” participants here have inserted the missing punctuation. The problem with that WP:BOLD behavior at this time is that it of course has falsified the context of the argument. (Symmetrically, I might have WP:BOLDly resolved the difference by inserting explicit reference to inline formulæ in WP:MOSMATH#Punctuation.) —SlamDiego←T 13:07, 9 August 2008 (UTC)
I grabbed some of the econ books and articles that I have immediately at hand, and began listing them below. It would be helpful if others were added to the list. —SlamDiego←T 13:10, 9 August 2008 (UTC)
- This exercise should of course be done for all disciplines, not just economy: abnormal psychology, accounting, acoustics, ..., zootomy. --Lambiam 21:36, 9 August 2008 (UTC)
- True, except in-so-far as the convention in economics has little relevance for that in acoustics and vice versa, and for the fact that economy (as a discipline) is the practice of frugality, whereas economics is the study of a cluster of problems concerning such things as the formation of prices. —SlamDiego←T 02:59, 10 August 2008 (UTC)
- Style manuals. The Journal of Financial Economics recommends putting punctuation after an equation (even if the equation is displayed on its own line), as does The Economic Journal. siℓℓy rabbit (talk) 01:05, 10 August 2008 (UTC)
Survey of Anglophonic economics sources
- Block-display formulæ punctuated:
- Hicks, John Richard; Wealth and Welfare v1
- Arrow, Kenneth Joseph; The Economics of Information
- Debreu, Gerard; Theory of Value
- Kreps, David M. A Course in Microeconomic Theory
- The Journal of Financial Economics
- The Economic Journal
- Block-display formulæ not punctuated:
- Machina, Mark J.; “Expected Utility Hypothesis”, The New Palgrave Dictionary of Economics 2e. (draft copy)
- Quick MBA web site (the first example I found, though it is opposite to my personal preference) Matchups 21:21, 17 August 2008 (UTC)
- Web pages do not qualify; we want to look at professionally edited academic publications. --Lambiam 12:10, 18 August 2008 (UTC)
- Partial (at least periods, but not full punctuation):
Finite St. Petersburg lotteries — Again
I think the article would be improved by emphasizing that finite St. Petersburg lotteries are the solution to the paradox. Let's take the millionaire-backer case. No rational person would pay $9 to enter a game with a 50% probability of losing it, plus a 0.000095% (about 1 in a million) probability of winning $1,048,575. If this were a rational economic decision then she would put all her salary, say $30,000, on millionaire-backer St. Petersburg lotteries. In, say, 10 years she would have spent $300,000 to gain a 3% probability of winning $1,048,575 and still have about a 50% probability of losing it all. That is clearly not rational. There is no paradox. Vanakaris (talk) 15:46, 7 September 2008 (UTC)
- I tend to agree with you, but we would need a source of significant repute that made such a claim to say that in the article. Given the sources we have found so far, the best I think we can do is present the finite-resource solution as an alternative and let readers form their own conclusion. --agr (talk) 18:20, 7 September 2008 (UTC)
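A minimal sketch of the finite-backer arithmetic discussed above, assuming the convention that the first head on toss k pays 2^(k-1) dollars and that the backer simply pays out their whole bankroll whenever the pot would exceed it (other conventions shift the numbers slightly):

# Expected value of one St. Petersburg ticket when the backer's bankroll is finite.
# Assumption: the payoff for a first head on toss k is 2**(k-1) dollars, capped at
# the backer's bankroll; all outcomes beyond the cap simply pay the bankroll.
def capped_expected_value(bankroll):
    k, total = 1, 0.0
    while 2 ** (k - 1) < bankroll:
        total += 2 ** (k - 1) / 2 ** k   # each uncapped term contributes exactly 1/2
        k += 1
    total += bankroll / 2 ** (k - 1)     # probability 2**-(k-1) of reaching the cap
    return total

for bankroll in (10 ** 6, 10 ** 9, 10 ** 12):
    print(bankroll, round(capped_expected_value(bankroll), 2))

Under these assumptions a millionaire backer makes a ticket worth about $10.95, and each thousand-fold increase in the bankroll adds only about $5 more, which is the logarithmic growth mentioned elsewhere on this page.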
Obvious solution
I think that the St. Petersburg paradox is much more complicated than it needs to be. How about this much simpler game, instead? Suppose a game has a 1 in one trillion chance of winning a googol dollars, and you win nothing otherwise. Would anyone pay more than a few dollars to play this game? But the expected value of the game is 10^88 dollars. It's obvious that people (other than pathological gamblers) wouldn't play this game simply because it's very unlikely that they would win anything. They neglect the incredibly unlikely event of hitting the jackpot, regardless of the payout. This is the Bernoulli solution of "probability weighting" where people neglect unlikely events. Perhaps prospect theory shows that people overweight small probability events, if they are not TOO small; but if the probabilities are incredibly small (like 1 in 1 trillion) then people underweight them. Bounded utility could also resolve this (once someone is sufficiently rich, does it make any real difference to them how rich they are? People have a limited amount of material needs/wants). But intuitively the reason I wouldn't play that game is I would completely discount the 1 in one trillion chance of winning. Halberdo (talk) 13:12, 16 January 2009 (UTC)
- People line up to buy lottery tickets when the jackpot gets in the $100 million range even though the odds of winning are minute and the expected value of a ticket is less than its price. The odds of winning a modest sum in the St. Petersburg game are much higher. Your "1 in one trillion chance of winning a googol dollars" example fails because no one can pay a googol dollars. Indeed, the fact that there is a maximum possible pay out is the real key to the supposed paradox. Even a googol-dollar jackpot leads to a very finite (<$200) expected value for the St. Petersburg game, as the article points out.--agr (talk) 13:37, 16 January 2009 (UTC)
- One could posit a god or demon that has unlimited resources, and likes toying with the lives of men. Such a being would be required for the St. Petersburg paradox as well. The subjective, "intuitive" reason why I wouldn't pay much money to enter either game is because I'd discount the very small chance of winning, not because of reservations about the hypothetical casino's ability to pay.
- It's interesting you bring up ordinary lotteries--I wouldn't play an ordinary lottery either, except for personal entertainment, even if the long-shot odds were in my favor. If a billionaire, for his amusement, sold one million $50 tickets to a lottery with a single jackpot of $100 million, limit 1 ticket per player, I probably wouldn't buy one.
- Anyway, whatever your views, I think that my trillion-googol game captures the essential quality of the St. Petersburg game--huge but very unlikely payout--while being much simpler. Halberdo (talk) 16:10, 16 January 2009 (UTC)
- The St. Petersburg paradox is a question in economics, so we have to consider what broad populations would do. It's certainly true that some people, like you, will not make bets on low probability winnings. But there are many others that will and do. The supposed paradox is that almost no one, even those who happily buy lotteries, will pay a large amount to enter the St. Petersburg game. Your trillion-googol game might not be unattractive to lottery players. I suspect that if you could convince people of your premise--that a supernatural being would grant them a googol dollars--many would indeed buy tickets. Certainly that is the experience with large jackpot lotteries. How many people really distinguish between 1 in a trillion odds and one in 100 million odds? Also it's important to note that the expected value of your game's tickets goes up linearly with the size of the jackpot. The St. Petersburg game grows logarithmically. So even with a googol-dollar payout, the St. P expected value is not big. --21:02, 16 January 2009 (UTC)
- The expected value of the trillion-googol game is $10^88, but no gambler would pay a similarly large amount to play it (probably you'd have trouble finding anyone who would pay $1000). It is thus the same situation as the St. Petersburg game. Halberdo (talk) 22:35, 16 January 2009 (UTC)
- Again, supra-astronomical sums like $10^88 are not possible in real life and I don't see why the hypothetical failure of economic actors to respond to such fairy tales should be of much interest. None the less, I think you do have a point. I suspect most people evaluate the St. P game by making some estimate of the maximum conceivable number of successive head outcomes they could experience, say 15 or 20, and terminate the sum there. The fact that the St. P game offers a reasonable probability of small payoffs might reduce valuation of the larger but improbable outcomes. There are many interesting experiments that one could try and lottery marketing experts may have data that could illuminate some of these questions, but we can't speculate in the article.--agr (talk) 17:01, 18 January 2009 (UTC)
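For reference, the arithmetic behind the two figures quoted in the thread above, assuming the same capped-payout convention as in the sketch in the previous section (first head on toss k pays 2^(k-1) dollars, capped at the backer's bankroll):

\frac{10^{100}}{10^{12}} = 10^{88}, \qquad E \approx \tfrac{1}{2}\log_2 10^{100} + 1 \approx \tfrac{332}{2} + 1 \approx 167 < 200.

So a googol-dollar bankroll caps the St. Petersburg sum after roughly 332 doublings, which is why the expected value of that game stays under $200 even then, while the trillion-googol game's expected value scales linearly with its jackpot.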
Link to “Ellsberg paradox”
I'm not sure why one would link to the article on the Ellsberg “Paradox”, unless the intent is to draw attention to reasons for doubting the expected utility hypothesis. In which case, I'd note that the Ellsberg “Paradox” is hardly unique amongst empirical results that challenge EU maximization, so perhaps there should be links to the article on Allais “Paradox” and to those which cover things such as violations of acyclicity and so forth. Or perhaps the link to “Ellsberg paradox” should be removed. —SlamDiego←T 12:30, 17 January 2009 (UTC)
Style corrections to the introductory text
In my opinion, the lead section needs a bit of style-editing. These are my suggestions:
- It should not start with a circular definition. We have a definition that the St. Petersburg paradox is a paradox related to probability theory. That's not the best way to introduce the topic! Imagine the article about Nash Equilibrium starting like this: Nash equilibrium is an equilibrium related to Game theory. Redundant, isn't it?
- What does the St. Petersburg paradox have to do with the city of Saint Petersburg? Can we write that in a more explicit way? Currently, the reasons for the name are not entirely clear in the lead section, in my opinion.
- The introductory paragraphs contain too many side-notes attempting to explain everything. Let's see: i. (theoretical); ii. (sometimes called St. Petersburg lottery); iii. i.e. infinite expected payoff; iv. (which takes only the expected value into account); v. (real); vi. (and it is one origin of notions of utility functions and of marginal utility); vii. (and that sellers would not produce a lottery whose expected loss to them were unacceptable). That's seven interruptions to the reading, most of them not needed. With some effort put into the editing, all those notes can be worked in just by reordering the ideas here and there. --200.6.181.24 (talk) 04:53, 1 March 2009 (UTC) oops forgot to sign in: --Forich (talk) 04:55, 1 March 2009 (UTC)
- The lede could be greatly improved, but the article should not unnecessarily test the patience and discipline of the reader.
- Priorities should be set with an understanding that the lede is often the only thing read. With that in mind, a summary of the solutions would seem to have a better claim to a place in the lede than does an etymological note.
- I'm not sure that the lede should indeed provide an “executive” summary of the solutions, but some part of the article should.
- —SlamDiego←T 06:02, 1 March 2009 (UTC)
Rejection of mathematical expectation
I have included a new section, “Rejection of mathematical expectation” about a different class of resolutions. I do not know how to use the peculiar method of referencing that has been adopted for this article, and am frankly unwilling to learn it. I have embedded two HTML comments that provided two appropriate references, which I repeat here:
- d'Alembert, Jean le Rond; Opuscules mathématiques (1768) vol iv p 284-5.
- Keynes, John Maynard; A Treatise on Probability (1921) Pt IV Ch XXVI §9.
Someone who does know how to use the referencing system prevailing here should edit the section to present these to the reader. —SlamDiego←T 00:04, 2 March 2009 (UTC)
Python script
Hello. I have made a Python script that simulates the lottery and exports the results to a csv file. You can then import the file in OpenOffice Calc and make a nice graphic like the one at http://img31.imageshack.us/img31/1011/lottery.png. It also prints a message whenever you win more than 500,000 in a run.
import random

runs = 1000000       # number of games to simulate
price = 10           # entry price, paid once per game
money = 1000         # starting bankroll
frequency = 1000     # write the bankroll to the csv every `frequency` games
filename = 'C:\\Documents and Settings\\user\\Desktop\\out.csv'

run, toss = 0, 0
f = open(filename, "a")
print("Running", runs, "times\n", "-" * 60)
while run < runs:
    money -= price                # pay the entry price once per game
    while True:                   # keep tossing until the first heads
        toss += 1
        if random.getrandbits(1):
            pot = 2 ** (toss - 1)
            if pot > 500000:
                print("Big win on run %i: %i" % (run, pot))
            money += pot
            if (run % frequency) == 0:
                f.write(str(money) + ",")
            toss = 0
            break
    run += 1
f.write("\n")
print("Final money:%10i" % money)
f.close()
The code should be self-explanatory. filename selects the output file; frequency selects how often results are exported (Calc doesn't accept 200,000 columns). Run it multiple times and it will append the results as a new row. Enjoy. --79.154.236.26 (talk) 19:59, 16 May 2009 (UTC)
flawed discussion?
The discussion is flawed. Here is an example, using non-infinite reasoning. A mail with similar content was sent to Mr. Martin, the author of the Stanford page.
Consider: You get a bet to either
- Win 2 billion with probability 1/1000 or
- Nothing with prob. 999/1000.
The bet costs 1 million.
Would you take the risk? The answer is no, since in 99.9% of cases you end up (more than) broke. If you didn't inherit a lot, this seems unattractive. OTOH, what if you are a big insurer with the pockets to buy 10,000 of these contracts? Then the answer is yes, since your probability of getting less than the 10 billion back is 0.029196 (binomial distribution with fewer than 5 hits out of 10,000 tries with prob. 0.001). If 20 billion are invested, that probability falls to 0.004979 (bin(9, 20,000, 0.001)).
In short, risk aversion flattens when averaged over a lot of people. The only difference in the original St. Petersburg Paradox is that it flattens a lot more slowly than in this example. The following happens to the distribution if it is restricted to 2^1-2^60:
 n    $             Prob.
 1    2             0.5
 2    4             0.25
 3    8             0.125
 4    16            0.0625
 5    32            0.03125
 6    64            0.015625
 7    128           0.0078125
 8    256           0.00390625
 9    512           0.001953125
10    1024          0.000976563
11    2048          0.000488281
...
56    7.20576E+16   1.38778E-17
57    1.44115E+17   6.93889E-18
58    2.8823E+17    3.46945E-18
59    5.76461E+17   1.73472E-18
60    1.15292E+18   8.67362E-19
Here, the stdev is 1,518,500,250 (about 1.5 billion) with a mean of 60. In order to average that stdev out, a lot of contracts need to be sold.
(Assuming a 2-stdev distance from $25, a stdev of $12, and the minimum convergence speed in the Central Limit Theorem of stdev/n^0.5, a total of 3.32E+20 bets would be needed.)
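A short check of the binomial figures in the comment above, using only the standard library and exact binomial tails (the quoted figures were presumably computed this way, but that is an assumption):

# Probability that at most k of n independent contracts pay off,
# each with success probability p (the binomial CDF at k).
from math import comb

def binom_cdf(k, n, p):
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

print(binom_cdf(4, 10000, 0.001))   # fewer than 5 payoffs out of 10,000: ~0.0292
print(binom_cdf(9, 20000, 0.001))   # fewer than 10 payoffs out of 20,000: ~0.0050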
- Interesting. I don't know enough maths to comment further :). If the article misstates the paradox or any of the common proposed resolutions, it ought to be reformulated. However, Wikipedia is not an academic journal, so if you're making a previously unmade contribution to the discussion (in the real-world, not Wikipedia) of the problem, then this is not the place to do it. Publish it somewhere reputable and then update this article accordingly, with a reference to the outside source. Joestynes 11:10, 20 Apr 2005 (UTC)
- "In short, risk aversion flattens when averaged over a lot of people." -- I don't think that this is actually new, nor does it contradict the article as written. Playing a game 10,000 times at once (as either 10,000 different subscribers or one player playing repeatedly) yields a substantially different and less risky "supergame". A lot of contracts being sold is simply repeated play of the game... Risk aversion doesn't flatten, what flattens is risk itself. Amortization/insurance/repeated-play can be used to reduce risk (reduce variation). --Brokenfixer 00:29, 16 January 2006 (UTC)
- In addition: notice that selling 10,000 shares to different subscribers not only reduces risk, it also improves the expected utility of the game as a separate effect. Even if you play the game only once, by splitting the cost of the game and its proceeds over 10,000 friends, you protect against the diminishing marginal utility of the money. 10,000 chances to become a millionaire is more attractive than 1 chance to become a 10-billionaire. --Brokenfixer 00:29, 16 January 2006 (UTC)
- However, since the expected mean value of the St. Petersburg game is infinite, it doesn't matter to the expected mean value whether you set the starting heads payoff at $0.01 or at $10,000. This is an apparent paradox. It can be cleared up by realizing that (1) the game cannot be realized in real-world terms (because no real-world agent can commit to a potential $10^googol payout), or that (2) the utility of money is non-linear (and the expected utility of the game is finite), or that (3) rational economic decision theory must take into account inherent costs associated with volatility (risk). A factory that produces 10,000 widgets per day is more profitable than a factory that produces 310,000 widgets on some single random day each month, since the variable factory must pay much higher contracts for storage, delivery, sales, and so on. (In my opinion, all 3 of these resolutions have empirical support.) --Brokenfixer 00:29, 16 January 2006 (UTC)
- First, I hope I'm participating in this talk section correctly; I'm fairly new to Wikipedia editing. Second... a while back I wrote something about the St. Petersburg problem that I felt explained it (and also why it isn't really a paradox). I know Wikipedia is supposed to consist of information that is verifiable, not "true." But I feel that this article is flawed, or rather, not organized well, because there isn't really a paradox, so I'm just going to post what I wrote and hope it might help:
- The St. Petersburg paradox asks how much a player (the casino's perspective will be extended upon later) should pay to play a gambling game, where gambling just means the ability to gain profit through random chance. The paradox is resolved when the decision making process behind gambling is further examined.//Say you have a one in one million chance to win one million dollars, how much should you pay to take this chance? The expected value is one dollar, so many people would pay one dollar. However, if you already had one million dollars, and you had a one in one thousand chance to win one billion dollars, would you pay the expected value (one million dollars) to play this new game? Most people would not. The reason for this is utility: in the first game, losing one dollar is insignificant, and gaining one million dollars is substantial. In the second game, losing one million dollars is significant, and the extra added money from gaining a billion dollars is unnecessary. Somebody who really needed every dollar he owned would not even participate in the first game.//This means that when an individual encounters a gambling game such as these, he will not base his decision primarily on expected value (though it is a useful tool), but rather he will base it on factors outside the game: his own wealth, his own desire for more wealth, his own ability to handle losses. A man who feels a great need for substantial sums of money will take more risks than a man who is content with his wealth.//Examining the St. Petersburg paradox once more: the price is not infinity, as expected value would predict it to be. The price is anything the player would want it to be, because the price is solely determined by outside factors. This also works from the casino's perspective: the casino must weigh its desire for increased income against the risk of large losses, which is based on factors like whether or not the casino has an infinite supply of money (and since none do, no casino in real life would participate in this game), and whether or not this added income is much needed.//To recapitulate: the expected value is infinity, but this is not a paradox because expected value is only one tool, one factor, used in determining price.
- Viewing the Petersburg problem in this light makes other refutations, like those based on real-world terms or utility, seem sort of extraneous or weak to me (though I did mention those in my own explanation). Yes, that's just an opinion, but maybe the article could still be reorganized in some way? Or maybe there's another reliable source out there that we're overlooking? —Preceding unsigned comment added by 98.180.35.37 (talk) 21:23, 12 May 2010 (UTC)
Paradox not explained
The expected gain is infinite? So any finite cost to play the game will result in an expected gain. Great! I don't see any paradox here. —Preceding unsigned comment added by 86.171.41.5 (talk) 10:47, 18 August 2009 (UTC)
- The apparent paradox is in the combination of few people saying that they would pay more than a relatively small price to play, and in our indeed not observing anyone buying the lottery. (If they can expect to gain from buying, why don't they?) The lede might make this more clear, but specific, non-economist editors tend to fixate upon one ostensible solution and attempt to structure the presentation of the problem accordingly. —SlamDiego←T 11:07, 18 August 2009 (UTC)
This is really only a paradox to people who cannot really grasp what is going on here. In mathematics, the harmonic series (what's being used here) diverges to infinity. In practice, it diverges so slowly that basically any number you use produces a very low number. This is alluded to in the table, but extremely poorly. The solution is "while the harmonic series diverges in theory, it does not in any applied sense". —Preceding unsigned comment added by 160.39.221.249 (talk) 22:00, 6 September 2009 (UTC)
- That may be a solution; it is not the solution, as the paradox would resolve even were that not true. —SlamDiego←T 22:41, 6 September 2009 (UTC)
- How is this even a solution? How does its diverging slowly solve the paradox? The expected utility is still infinite, and it still doesn't seem rational to pay a large sum of money to play the game (whether or not it diverges slowly). --Beala (talk) 18:13, 22 September 2009 (UTC)
- I think that he's making claim of practical impossibility. Any “flip” would take finite time, &c. —SlamDiego←T 19:57, 22 September 2009 (UTC)
One issue with the St. Petersburg Paradox is deciding which resolution is the resolution. Many are practicality related--how often you can flip a coin each day or what would happen to you in a real world casino if you successfully flipped a coin the same way thirty times in a row. I prefer correcting the clear flaw in the mathematical reasoning. Each term in the series represents a stage in the betting. At each stage the bet is doubled. The exponentially growing bet soon exceeds any reasonable or even imaginable bankroll. At that point the series must stop and the expected value of the game is the sum up to that point, which is finite, indeed modest. Looked at this way the problem is essentially identical to the Martingale (betting system), which is widely considered an example of mathematical foolishness. That the same opprobrium is not applied to the St. Petersburg Game is the true paradox.--agr (talk) 02:08, 24 September 2009 (UTC)
standard deviation
An often used measure of risk is the Standard deviation.
Example no. 1: An investment of $5 has a mean expected return of $5.5 (a 10% gain) and a standard deviation of $3. This means a 68.2% probability that the actual return lies anywhere between $2.5 and $8.5, that is, within ±1σ of the mean.
Example no. 2: An investment of $5 has the same mean expected return of $5.5 (a 10% gain) but a standard deviation of $1, i.e. the same mean expected return as above but smaller risk (smaller deviation). This means an actual return somewhere between $4.5 and $6.5 with probability 68.2%.
A rational agent would prefer investment no. 2 as less risky, and I think this is the standard textbook approach to financial risk, so it could be incorporated into the solution of the paradox (if not already mentioned in the article; maybe it is mentioned somewhere, I am not sure. But I see that the financial risk article also does not cover the subject in detail as far as calculation formulas go, or the deviation as a measure of risk).--Vanakaris (talk) 08:46, 20 July 2010 (UTC)
- Standard deviation is useful in comparing two investments with similar expected returns, but I'm not sure how to use it here. We have only one investment, a ticket to play the St. Petersburg game and we want to know what it is worth. Also when it comes to lottery tickets, high standard deviation seems to have a positive value. People line up to buy tickets for multimillion dollar jackpots, even when the odds against them are astronomical. But again there is no real paradox to solve here. The St. Petersburg game has a finite value for any reasonable value of the casino's bankroll. It only appears paradoxical to people who know enough math to sum an infinite series but not enough to do it properly.--agr (talk) 15:51, 20 July 2010 (UTC)
It certainly is an interesting problem, but no real paradox, I agree. Now my view is that it has at least two separate points that seem paradoxical (although in fact they are not really). One is the infinity thing. That solved (in the students' minds), we go to the next. Which is the rationality choice thing.
So, the solution to the infinity thing is, simply put, that "infinity" and "for every value we get, we can think of a bigger bankroll that yields a greater value" are by definition the same.
So the second apparent paradox is about the finite game (having put the infinity thing aside). It is the fact that, e.g., $4.28 for a $100 bankroll is still too much, according to the intuition of most players. So my point is that this intuition is correct. And also maybe there is a more straightforward way to demonstrate it than marginal utility: very big standard deviation.
Note 1: Essentially I believe that what I'm talking about must be the same thing that the sections St. Petersburg paradox#Probability weighting and St. Petersburg paradox#Rejection of mathematical expectation are talking about. Also, I quote from P.G. Keat, Managerial Economics, page 524: "The economic theory of marginal utility of money can also provide a way to incorporate risk in decision making ...". So maybe the marginal utility thing is essentially the same thing too ... different wording, different math formulation ... I guess ... I don't know!
Note 2: We can estimate the deviation pretty easily for a finite case as σ = sqrt[Σ (Ri - R)^2 * pi], where R is the mean value, Ri is each actual value, and pi is the corresponding probability of that value. I'm working on it. Next we need to calculate what a fair value for entering the lottery would be, based on a method for calculating the value of risky investments from the standard deviation.--Vanakaris (talk) 17:33, 20 July 2010 (UTC)
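A minimal sketch of that σ formula applied to one particular finite case, assuming the truncation used in the "flawed discussion?" section above (stages n = 1..60, payoff 2^n with probability 2^-n, ignoring the leftover probability mass of 2^-60):

# Mean and standard deviation of a truncated St. Petersburg lottery,
# using sigma = sqrt(sum (Ri - R)^2 * pi) from the note above.
payoffs = [2 ** n for n in range(1, 61)]          # Ri
probs = [2 ** -n for n in range(1, 61)]           # pi

mean = sum(r * p for r, p in zip(payoffs, probs))                # R = 60 here
var = sum((r - mean) ** 2 * p for r, p in zip(payoffs, probs))
sigma = var ** 0.5

print(mean, sigma)   # mean 60, sigma about 1.5e9, matching the figure quoted earlier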
Now the problem is that the textbook I have available does not provide a formula for adjusting the mean value based on the deviation after all. So on second thought the right way to put it formally seems to be: it is a two-dimensional problem. That is, we have pairs (mean, deviation). So any general solution would be a transformation (mathematical transformation) from every pair (mean, deviation) or (R, σ) to the "correct" pair (R_t, 0). Also this is kind of original research, so I cut it, sorry!--Vanakaris (talk) 18:44, 20 July 2010 (UTC)
- Take a look at Modern portfolio theory, which discusses some of the issues you raise. I think this discussion belongs there.--agr (talk) 19:29, 22 July 2010 (UTC)
Repeating history
I haven't been here for a few years, and now I am surprised to see that the discussion is still going on heavily. All old discussions have been dumped meanwhile, but it all reads familiar: some people don't understand where the paradox is at all, some think it isn't one because for one or another practical reason the lottery couldn't be played anyway, etc. Even some familiar names can be found...
All this time scientists have basically long agreed on the expected utility theory being the best explanation for the St. Petersburg Paradox, since it does not depend on any particular detail of the St. Petersburg lottery (like its non-practicality), but also explains a whole lot of other decisions reasonably well - and on top of this can be derived from a small set of reasonable axioms on preferences.
It seems Wikipedia editors are - to put it nicely - more open-minded and do not want to accept that sometimes questions can be considered as settled... - Sorry, that is just my opinion. I hope not to upset anybody with this!
-84.58.232.22 (talk) 08:18, 8 May 2011 (UTC)
- To be fair it's a fairly ridiculous paradox anyway. Anyone asked to play the game wouldn't pay a "fair" price. With no minimum fixed entry price the best way to play is to pay $0.01 and make at least $0.99 profit each time you play. The casino has infinite resources anyway, so you're not even being unfair. talk to / spy on Kae 21:14, 29 May 2011 (UTC)
The standard explanation in economics for the failure of the "double your bet if you lose" gambling strategy involves the fact that the bettor has finite resources. The St. Petersburg game is mathematically the same system dressed in different verbiage. Why should its explanation be any different? --agr (talk) 23:07, 29 May 2011 (UTC)
Politically correct gender pronouns
Why is it sexist if the player is a guy but not if she is a girl? It's clumsy to say "the player did x, the player did y, etc.", but at least it would avoid gendered pronouns. — Preceding unsigned comment added by 169.233.218.184 (talk) 23:54, 27 May 2012 (UTC)
- I have changed "she" to "the player".--agr (talk) 02:44, 29 May 2012 (UTC)
Not to beat a dead horse to death but...
In the interest of making the article clearer for some of us, perhaps special attention should be paid to resolving the issue of 'what if you win the first bet?' I wouldn't pay $1000.00 to play because, regardless of the validity of the expected value over an infinite number of plays, I'd be afraid of winning on the first play. Then I'd be down $999. I think it's very difficult for some of us to imagine sitting there at the casino forever waiting to win an infinite amount of money. 71.139.179.199 (talk) 08:19, 3 January 2013 (UTC)
- I'm not sure what you mean by "win on the first bet." In my mind, winning a round means the coin flip is heads and the pot doubles. If you mean the first round is tails and you win only the initial pot of $1, you are correct, you would lose $999. There are plenty of popular gambling games where you can lose everything on the first round, e.g. roulette. Millions of people buy lottery tickets with an infinitesimal chance of winning. But your instincts here are valid. No casino imaginable has enough resources so that the St. Petersburg game would have an expected value anywhere near $1000.--agr (talk) 13:00, 3 January 2013 (UTC)
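To put a rough number on that last point, here is a small sketch under one set of assumptions (mine, not the poster's): a $1 first payout that doubles with each head, with payouts capped at the casino's bankroll W, so that the expected value is E = L/2 + W/2^L with L = 1 + floor(log2(W)):
<syntaxhighlight lang="python">
# Expected value of the St. Petersburg game when payouts are capped at a
# bankroll W (assuming a $1 first payout that doubles on each head).
from math import floor, log2

def capped_expected_value(W):
    L = 1 + floor(log2(W))            # number of rounds the casino can cover
    return L / 2 + W / 2 ** L

for label, W in [("$1 million", 10**6),
                 ("$1 trillion", 10**12),
                 ("one dollar per atom in the observable universe", 10**80)]:
    print(f"bankroll {label}: expected value ≈ ${capped_expected_value(W):.2f}")

# Even a bankroll of roughly 10^80 dollars gives an expected value of only
# about $134; an expected value near $1000 would need a bankroll on the
# order of 2^2000 dollars.
</syntaxhighlight>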
Peters 2011 debunking of Menger?
I haven't looked at this in detail, but I'm a little uneasy about this. The reference is to a paper that is unpublished, sitting on the arXiv. If we allowed every publication on the arXiv as a valid reference, the Riemann hypothesis page would be edited to say that it has been solved every other week. So I would suggest that this section be removed until the result is referenced as coming from a journal. The whole paragraph comes off awkwardly. It is the sort of thing that could easily have been inserted by a crank who wanted to promote his own works (not to bad-mouth Peters, I have no idea who he is; for all I know he is a well-respected mathematician).
--71.229.14.214 (talk) 00:35, 26 December 2012 (UTC)
- True, this view is not the currently accepted view of Menger's contribution to this debate at all. This section should be rewritten entirely or removed. iNic (talk) 11:57, 29 December 2012 (UTC)
- Peters' debunking of Menger's (1934) paper was first presented in Peters (2011a), which *is* published in the academic literature (Phil. Trans. Roy. Soc., 369, 4913–4931). A more detailed exposition appeared in Peters (2011b), which, as pointed out above, is currently unpublished and available on arXiv. It was an oversight to cite only the 2011b paper in this context and not the published 2011a paper. I therefore propose to revert to the 29 December version and include citations to both of Peters' 2011 papers, to make it clear that the errors in Menger (1934) have been exposed in a reputable academic journal.
- Atia2 (talk) 12:45, 11 January 2013 (UTC)
- Well, it's unfortunately not the case that one publication in a reputable academic journal always convinces every scholar in the field that the view proposed is the correct one. On the contrary, the overwhelming consensus over the years that Menger's arguments are clear and valid is something Peters points out himself. Peters advocates a totally new interpretation of the paradox, as well as of Menger, and so far I haven't seen a single scholar in support of his new ideas. So in terms of current support it's hard to find a more stark contrast than that between Menger and Peters. Anyway, I think it's a good idea to include a remark in the article that there is one current author who is a critic of the otherwise consensus view regarding Menger, and give a reference to his papers for interested readers. iNic (talk) 15:09, 11 January 2013 (UTC)
Iterated St. Petersburg lottery - looks like original "research" to me
The article contains the phrase -
Since E1 is infinite, En is infinite as well. Nevertheless, expressing En in this way shows that n, the number of times the game is played, makes a finite contribution to the average per-game payout.
This is ridiculous on its face. The equation
En = E1 + JUST_ABOUT_ANYTHING
is exactly as valid as the equation in the article. Infinities cannot be manipulated like finite numbers.
In my opinion, this whole section should be removed.
Dr Smith (talk) 03:58, 6 October 2012 (UTC)
- I completely agree. Time for deletion. I've had a fact tag on this section for quite a while but no one is willing to defend it by giving any references. I also agree that the reasoning in this section looks like nonsense. However, that in itself is not a valid reason for deletion. Actually, some correctly published ideas in this area are even more nonsensical and ridiculous, and they can't be deleted from the article. iNic (talk) 00:41, 10 October 2012 (UTC)
- The main problem with the deleted section was its assertion that the number of games played makes a finite contribution to the expected payout per game. Instead it should have conveyed something subtly different: that for any given confidence level (e.g. 1/4), the number of games played makes a finite contribution to the minimum payout associated with that confidence level.
- So for example, in a series of 4 games there is a 1/4 chance of the average payout per game exceeding $3, whereas in a series of 1,024 games there is a 1/4 chance of the average payout per game exceeding $7. Whatever average payout per game you are targeting, the likelihood of achieving it increases the more games you play - but (and this is a crucial point) even if the number of games is increased to an astronomically large number, the likelihood of anything more than a modest average payout per game is still very low.
- I've added a new section "Expected payout per game in a series of games" which covers all this, complete with citation (and without billing itself as a resolution to the paradox).
- I haven't included a graph from a simulation of a large series of games, but personally I think such a graph would be useful and interesting. It would give a sense of the likely (slow) rate of rise in average payout per game from a real series of games. --Nabav (talk) 13:47, 12 December 2012 (UTC)
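In the absence of such a graph, here is a small Monte Carlo sketch of the kind of simulation mentioned above, assuming (my choice, consistent with the figures quoted in this thread) a $1 first payout that doubles with each consecutive head; it prints the running average payout per game at successive powers of two:
<syntaxhighlight lang="python">
# Running average payout per game over one long run of St. Petersburg games,
# sampled at 2, 4, 8, ..., 2^20 games. Assumes a $1 first payout that doubles
# with each consecutive head. Results vary from run to run (heavy tails).
import random

def one_game():
    payout = 1
    while random.random() < 0.5:      # another head: the pot doubles
        payout *= 2
    return payout

random.seed(1)                        # fixed seed only for reproducibility
total, games = 0, 0
for k in range(1, 21):
    target = 2 ** k
    while games < target:
        total += one_game()
        games += 1
    print(f"after {games:>7} games: average payout per game ≈ ${total / games:.2f}")
</syntaxhighlight>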
- Removed your section once again. Please discuss your planned contributions here at the talk page first before you enter anything again. iNic (talk) 12:38, 29 December 2012 (UTC)
- Excuse me, have I missed something? Has Wikipedia issued a new edict stating that all contributions must now be vetted beforehand on the Talk page by gatekeepers such as iNic before being allowed in?
- No, they haven't. Neither I nor anyone else needs to "discuss their planned contributions" with you before applying them to the article. Thus Wikipedia's exhortation to "be bold". And while you are equally within your rights to delete what others have written, I note that you haven't seen fit to provide any reason whatsoever for this particular deletion - which happens to relate to material backed by citation.
- This deletion of material backed by citation is especially puzzling given your own stated belief (further above in this thread) that published ideas "can't be deleted from the article". --Nabav (talk) 21:08, 3 January 2013 (UTC)
- Yes, you have missed that edit warring is something that should be avoided. One way to avoid that is to use the bold, revert, discuss cycle. This is what I suggest above.
- A WP article shall in the first place go to secondary sources to find which ideas have become standard or most accepted. Only if that is impossible for some reason should original research articles be considered as the source for the main text. But that is not the case here. This paradox is 300 years old and many papers and books have been written about it over the centuries, so we know for sure which ideas have become the accepted ones in this case. Your section did not describe one of these.
- However, what we could do is to add a section at the end of the article pointing out that new original ideas for solutions are still published to this day, and in that context mention the paper you refer to from 2007, as well as others, like the papers by Ole Peters from 2010 and 2011 promoting yet another entirely different original solution. iNic (talk)
- iNic, this is a misunderstanding.
- The new section does not present any ideas, original or not, for a solution. It doesn't say anything whatsoever about solutions.
- That's why I did not place it within the section entitled "Solutions of the paradox".
- I suspect you may have only glanced briefly at my new section and erroneously concluded that it resulted from me reverting your deletion - hence your talk of edit warring, etc. I didn't revert your deletion, so all talk of edit warring is misplaced.
- Rather I spent several hours writing a new section, which I don't believe you have any reason to object to given that ...
- 1) It doesn't present a solution, much less a solution arising from original research
- 2) Its key points are backed by citation
- 3) It doesn't contain any vague, indefensible talk of "finite contributions", the way the old section did
- What does it do? It makes a single, uncontroversial point (with a citation from Pianca) - namely "If 2^k games are played, the average payout per game has a 1/2 chance of being [k/2 + 1] dollars or less, a 3/4 chance of being [k/2 + 2] dollars or less, a 7/8 chance of being [k/2 + 4] dollars or less, and so on" - and then it elaborates on this point. And I think it's an interesting point. It gives a sense of what is overwhelmingly likely to be true of the average payout per game even when the number of games played far exceeds practical limits.
- What's wrong with that? Nothing, as far as I can see.
- Given the above, I propose putting my new section in as-is. If there are minor problems with it here and there, these can always be corrected by you or others. --Nabav (talk) 13:36, 4 January 2013 (UTC)
- Which secondary sources do you have for the claim that this is an interesting or valid observation? In other words, has Pianca made it to the history books? Is his analysis accepted by others? I can't see that it is. On the contrary I would say his analysis is very controversial and new, as witnessed by the last sentence of his paper: "Overall, the theoretical EMV of the St. Petersburg game is infinite as held by the traditional view, but this can only be achieved if the game is in fact played an infinite number of times, a practical impossibility." iNic (talk) 17:27, 4 January 2013 (UTC)
- Here, again, is the claim the section is making: "If 2^k games are played, the average payout per game has a 1/2 chance of being [k/2 + 1] dollars or less, a 3/4 chance of being [k/2 + 2] dollars or less, a 7/8 chance of being [k/2 + 4] dollars or less, and so on". Could we please confine ourselves to this claim? Whatever other claims Pianca might make in his paper are not relevant to this discussion. (I can't resist pointing out, however, that the last sentence of Pianca's paper is not in the least bit controversial. He is simply saying that if you play the game a finite number of times, your winnings will be finite, not infinite. This is trivially true because when any given game pays out, it pays out a finite amount. Infinite EMV does not mean infinite winnings for a person playing the game a finite number of times.)
- Now let's split your question about secondary sources into two parts:
- 1) "Q. Which secondary sources do you have for the claim that this is an interesting observation?"
- A. There is no such thing as an objectively interesting observation. What is highly interesting to one person may be only marginally interesting to another, and totally uninteresting to a third person. In any case, when it comes to Wikipedia articles what is clearly important is to strive to be maximally interesting to the sorts of people who read Wikipedia articles, lay readers included. You can't expect secondary sources to tell you what's likely to be interesting in this sense, so I can't make any sense of this demand for what you term "secondary sources ... for the claim that this is an interesting observation".
- 2) "Q. Which secondary sources do you have for the claim that this is a valid observation?"
- A. Pianca. Plus, if you'd like, I can cite another fellow whom Pianca in turn cites in the relevant section of his paper. Excuse me, but I really don't see what the problem is here. Are you actually casting doubt on the claim that "If 2^k games are played, the average payout per game has a 1/2 chance of being [k/2 + 1] dollars or less, a 3/4 chance of being [k/2 + 2] dollars or less, a 7/8 chance of being [k/2 + 4] dollars or less, and so on"? Has the new section got those probabilities or those amounts wrong? I really think not, and Pianca thinks not, and this other source thinks not. The proposition is true, it's sourced, and within the new section the simple mathematics is laid bare for all to see. Job done.
- Now let's add it. Like I say, if you have any reservations about individual details within the new section, you can always tweak them. --Nabav (talk) 21:53, 4 January 2013 (UTC)
- No, the 2007 paper by Pianca is not a secondary source. It is original research. As I've said, I agree that it's interesting for the general reader to know that the debate about this old problem is still being carried on by some odd fellows. But our job is not to lie. And to pretend that brand new ideas are part of the well known history of this paradox is to lie. Pianca's view that the expected value of the game is only infinite if played an infinite number of times is both completely new and completely wrong. The expected value is infinite each time you play, and that is of course true also if you play repeatedly. You say that you want to add some intermediate reasoning Pianca has in his paper but not the point he wants to make, which makes no sense at all. If we refer to the Pianca 2007 paper at all, we must of course be faithful to what he actually claims, and not pretend that he says something else. iNic (talk) 02:36, 5 January 2013 (UTC)
- I have rearranged the page now so that you can add the views of Pianca under "Recent discussions". iNic (talk) 03:44, 5 January 2013 (UTC)
- Why can I not get a simple answer from you as to whether or not you have a problem with the simple claim the new section is making, namely: "If 2^k games are played, the average payout per game has a 1/2 chance of being [k/2 + 1] dollars or less, a 3/4 chance of being [k/2 + 2] dollars or less, a 7/8 chance of being [k/2 + 4] dollars or less, and so on"? Are you suggesting that's not correct?
- Please answer the question.
- It's getting tiresome to reiterate that this is not some sort of radical, wacky claim. The claim is straightforward. The math is simple. The claim is sourced. You don't like the source? Well, that's bizarre, but fine: I'll cite a different source who says the same thing.
- Not that I should even have to resort to switching the source. I'm well within my rights to cite Pianca as asserting "If 2^k games are played, the average payout per game has a 1/2 chance of [etc.]" Why? Because he does in fact assert it. Now you complain: Well, yes, he asserts it, but only as part of "some intermediate reasoning" (whatever that means). Can you please show me the Wikipedia guideline that says we're not allowed to cite assertions that are made in the context of "intermediate reasoning"?
- And then you complain that I'm effectively "lying" in my new section because I bring in only the assertion that Pianca makes in his so-called "intermediate reasoning"; I don't bring in the real "point he wants to make". I'm "pretending" he said something other than what he said. Sorry, what? Let me get this straight. Pianca's paper, like any paper, contains lots of assertions. I use one of those assertions, a totally unobjectionable one, in a citation. You then protest: No, you can't do that, because elsewhere Pianca makes a further point, and that further point is his real point, the point he really wants to make. You're being dishonest by not quoting that point. Can you please show me the Wikipedia guideline that says we're not allowed to have free rein in choosing which statements from our sources to bring into our articles and which statements not to bring in?
- You discover one sentence - one sentence! - in Pianca's paper that you find objectionable, and you decide that it means every other statement in the paper is thereby rendered unsuitable for citation. Don't you realise how crazy that is? And it's not even as if the sentence IS objectionable! The operative word in the sentence is "achieved". He's just spent the whole of the last part of the paper discussing the respective likelihoods of achieving different payout-per-game ranges in practice, when the game is actually played in simulations. And he's saying, in effect, Yes, the expected value of the game is infinity, but a person cannot actually, in practice, ACHIEVE a payout-per-game of infinity so long as the person plays the game only a finite number of times. To ACHIEVE a payout-per-game equal to the expected value of the game (infinity), a person would have to play the game an infinite number of times. The point is so simple, so trivial, that I cannot believe I'm actually having to explain it.
- Another way of expressing the above is by saying: suppose we simulate Petersburg gameplays on a computer. After a finite number of gameplays, can we expect that the resulting cumulative winnings divided by the number of games played will be equal to the theoretical expected value of the game (i.e. infinity)? Answer: of course we can't. That's why Pianca says in his abstract: "the theoretical and simulated outcomes are in agreement only if the game is played an infinite number of times, a practical impossibility." He's not DENYING that the theoretical expected value of the game is infinity. He's just saying: you're going to be disappointed if you imagine you're going to win infinity dollars after playing the game a finite number of times. Surely you don't disagree???!!!
- I think I know the answer to that already. Probably, none of what I've said in the above two paragraphs is going to make any impression on you. You're going to persist in misconstruing the sentence in question as being some sort of radical new claim. Sigh. Fine, I'll use a different source. Okay? --Nabav (talk) 14:46, 5 January 2013 (UTC)
- Yes please, provide a source from the well known history of this puzzle that claims that the type of consideration you make in your section is important for a suggested historical solution. All you do in your section is to list the expected value for each outcome of a series of k games. The sum of these outcomes is infinite as well, of course. That the actual outcome of any number of games is finite is also totally obvious. This discrepancy is the very problem this article is all about. There is no need to restate the basic problem once again, but now in a roundabout manner. It is already stated clearly in the beginning of the article. If Pianca's new ideas are irrelevant to you, as you claim, I don't understand what you are trying to say. Is this passage in Pianca 2007 simply something that you find very interesting personally? If so, why do you feel the need to put it up on a Wikipedia page? iNic (talk) 03:48, 6 January 2013 (UTC)
- You say my new section does nothing more than "list the expected value for each outcome of a series of k games". Um ... no, that's not the case. Have you actually read the new section?
- Let's try again. The assertion is: "If 2^k games are played, the average payout per game has a 1/2 chance of being [k/2 + 1] dollars or less, a 3/4 chance of being [k/2 + 2] dollars or less, a 7/8 chance of being [k/2 + 4] dollars or less, and so on."
- Those fractiony things are probabilities. You seem to have missed them. The assertion is giving the respective probabilities of different payout-value ranges when 2^k games are played.
- So, when you say "All you do in your section is to list the expected value for each outcome" ... you're wrong. I'm giving the respective probabilities of different payout-value ranges when 2^k games are played.
- When you say I'm doing nothing but "restating the basic problem once again, but now in a roundabout manner", you're wrong. I'm giving the respective probabilities of different payout-value ranges when 2^k games are played.
- When you say you "don't understand what [I'm] trying to say", I can only repeat, very loudly:
- I'M GIVING THE RESPECTIVE PROBABILITIES OF DIFFERENT PAYOUT-VALUE RANGES WHEN 2^K GAMES ARE PLAYED.
- And you know what? A statement of the respective probabilities of different payout ranges when the game is actually played is interesting. It's interesting to learn that you can play the game as many times as there are atoms in the universe and still your average payout is overwhelmingly likely to be peanuts. It's interesting to learn you can be highly confident that when you play the game repeatedly, the average payout-per-game will creep up in a roughly logarithmic fashion.
- Why do you think people write computer simulations of the Petersburg lottery? Because they're curious to know what actually happens, what winnings are actually achieved, when the game is played a large number of times. They find it interesting to discover that the results show a logarithmic increase in average winnings as more and more games are played, with average winnings at modest levels even after huge numbers of gameplays, exactly as my "If 2^k games are played..." assertion predicts. Go into Google and type: "petersburg paradox" "simulation". You'll see from the results that I'm not the only person who is interested in what actually happens over any given number of gameplays of the Petersburg lottery. Not by a long shot.
- You instruct me to "provide a source from the well known history of this puzzle that claims that the type of consideration you do in your section is important for a suggested historical solution". I will not, I need not, because - for the last time - I'M NOT CLAIMING A SOLUTION TO THE PARADOX. I'm describing - in detail, complete with the actual probabilities - what a person can expect to win over any given number of Petersburg gameplays. It's contextual information. You don't think a discussion of the average payout levels a person can expect to achieve when engaged in repeated Petersburg gameplays is even remotely interesting or relevant to a discussion of the Petersburg paradox? Tough. Lots of other people do find it relevant and interesting, as per the above.
- All this being the case, don't tell me I can't put the "If 2^k games are played..." material up on a Wikipedia page. The material is valid, it's sourced, and it answers to a genuine interest and curiosity among more people than just me. End of story. --Nabav (talk) 13:56, 6 January 2013 (UTC)
- The problem with your contribution is that you present this piece of calculation as if it were obvious and true, when in fact it's not. Sure, I'm the first one to agree that it's interesting that the increase of winnings when repeatedly playing the game seems to follow some law. However, different writers have explained this in different mathematical and conceptual ways and drawn different conclusions from it. Feller redefined 'fair value' 1936, and based on this definition a logarithmic increase of expected winnings popped out. Buffon didn't have access to computers 1777 so he used a child instead to make practical simulations. Based on his experiments he derived a special utility function to describe the increased expected utility when playing repeatedly. Now Vivian and Pianca use a method in the same spirit as some of Buffon's calculations to derive a relationship between expected gain and number of games played. Of the three explanations the one by Feller is the one that has been most endorsed by other writers, by far. But that doesn't mean that his solution/interpretation is the "true" one. Far from it. But it indicates that if the WP page is to bring up this topic we can't omit the Feller account. Buffon can be mentioned as a historical curiosity and Vivian and Pianca can be mentioned as recent authors that partly follow in the footsteps of Buffon. iNic (talk) 02:54, 8 January 2013 (UTC)
- iNic, thanks for your latest reply and thanks for approaching this in a constructive spirit. I partly agree and partly disagree with what you've said.
- First, the parts I disagree with:
- (1)
- Clearly the assertion "If 2^k games are played, the average payout per game has a 1/2 chance of being [k/2 + 1] dollars or less, a 3/4 chance of being [k/2 + 2] dollars or less, a 7/8 chance of being [k/2 + 4] dollars or less, and so on" is either true or false. And the calculations that my new section (and its source) use to support the assertion are either valid or flawed.
- When you say "The problem with your contribution is that you present this as piece of calculation as if it were obvious and true, when in fact it's not", this would appear to suggest some level of suspicion on your part that the assertion "If 2^k games are played, the average payout per game has a 1/2 chance of being [k/2 + 1] dollars or less, a 3/4 chance of being [k/2 + 2] dollars or less, a 7/8 chance of being [k/2 + 4] dollars or less, and so on" might be false (which would mean the calculations my new section uses to support it must be flawed).
- Again: the reality is one or the other. Either the assertion is true or it's false. If it's false, it certainly doesn't deserve a place in this article. If it's true, it does. I'm convinced it's true; you're apparently not. So: could you tell me where the misstep or missteps occur in the calculation given in the new section? Further, if the calculation is wrong and hence the payout-ranges and probabilities given are incorrect, it raises the question: what are the correct payout-ranges and probabilities?
- Well, it's not hard to see that this calculation is nothing more than a heuristic, at most. But luckily for you, Wikipedia is not about truth and falsity at all. It's all about verifiability. Please read the Wikipedia guidelines if you haven't already. So "truth" is not what we talk about here. What we discuss is how to present recent ideas in this field in a good and neutral way. The key word here is neutral. iNic (talk) 12:06, 11 January 2013 (UTC)
- (2)
- Feller redefined 'fair value' 1936, and based on this definition a logarithmic increase of expected winnings popped out. I think you mean: based on this definition a logarithmic increase of entry fees popped out. In other words he concluded that a variable entry fee, with the size of the fee increasing logarithmically with the number of trials, would ensure fairness in his sense.
- No, his entry fees are calculated from his law by dividing by n, the number of games played. His law is logarithmic: n log n. iNic (talk) 12:06, 11 January 2013 (UTC)
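For readers following this exchange, a small numerical sketch of the n log n law being referred to (assuming, as Feller's result is usually stated, the $2, $4, $8, ... payoff convention): the total "fair" fee for a block of n games is about n·log2(n), so the per-game fee grows like log2(n):
<syntaxhighlight lang="python">
# Feller-style fee schedule: total fee ≈ n * log2(n) for a block of n games,
# i.e. a per-game fee of about log2(n). (Assumes the $2, $4, $8, ... payoff
# convention usually used when stating this result.)
from math import log2

for n in (4, 64, 1024, 10**6, 10**9):
    print(f"n = {n:>10}: total fee ≈ {n * log2(n):,.0f}, per-game fee ≈ {log2(n):.1f}")
# Even for a billion games the per-game fee is only about $30.
</syntaxhighlight>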
- Feller isn't saying: Hey everyone, have you ever noticed the curious phenomenon whereby Petersburg average winnings seem to grow logarithmically with the number of gameplays? Well, I've got an explanation for why that is. He's doing something considerably more ambitious: identifying a pattern of entry fees that ensures fairness according to his definition of fairness. In this regard he's seeking to resolve the paradox. My section in contrast is very modest in its goal and certainly doesn't compete with past attempts at resolving the paradox, indeed it doesn't try to resolve the paradox at all. Rather it simply informs the reader of the respective likelihoods of different payout ranges being attained.
- I don't think his dividing by n is considered to be especially ambitious. However, he has a different way of deriving a general law than the heuristic you like, which must be mentioned in the article if the Buffon heuristic is mentioned. BTW, Buffon never used this calculation as anything but a simple heuristic. When supporting his view he used other arguments. iNic (talk) 12:06, 11 January 2013 (UTC)
- The assertion made in my new section can be endorsed without thereby endorsing (or otherwise) Feller's resolution to the paradox. My new section doesn't give the reader any basis for knowing how to feel about Feller's resolution.
- We are not dealing with feelings here. What we need to do is to represent the different ideas within this sub-topic of the St. Petersburg paradox in a fair way. Fair here means in a neutral way with regard to the sources. If a few ideas are to be presented, only the most significant ("truth" is irrelevant) should be included. If a less significant idea is to be included, like your favorite heuristic, the more important ideas must be included as well, along with other ideas of comparable importance to the heuristic. iNic (talk) 12:06, 11 January 2013 (UTC)
- (3)
- Buffon didn't have access to computers 1777 so he used a child instead to make practical simulations. Based on his experiments he derived a special utility function to describe the increased expected utility when playing repeatedly.
- So Buffon was engaged in a variety of different types of activities including: a) Trying to get a handle on what levels of winnings would occur when the game was actually played repeatedly - this he did using the experiments you mentioned. b) Working out the mathematical basis of what he discovered in his experiments. c) Proposing possible resolutions to the paradox.
- The assertion made in my new section can be endorsed without thereby endorsing (or otherwise) anything Buffon said that comes under point (c) above.
- What is legitimate is to ask whether the assertion made in my new section is consistent with those of Buffon's conclusions that come under point (b). I think probably it is. Will have to check.
- ---
- Now, as to what I agree with:
- a) Buffon can be mentioned as a historical curiosity and Vivian and Pianca can be mentioned as recent authors that partly follow in the footsteps of Buffon. I'm happy with that. One thing they all have in common is that they address the question I'm interested in, i.e: what can you expect to win over repeated gameplays?
- b) Feller deserves a place in the article. Note however that the reason I think he deserves a place is different from the reason I believe my new section deserves a place. Feller deserves a place because he gave a compelling description of a possible resolution to the paradox. --Nabav (talk) 18:34, 8 January 2013 (UTC)
- There are other writers and ideas that need to be mentioned as well in this context. Some concern fractals and chaos, others continued fraction coefficients, for example. iNic (talk) 12:06, 11 January 2013 (UTC)
- I grow weary of this. I'd thought for one brief shining moment that you were starting to be more constructive, but this I now realise was wishful thinking. Wikipedia is, you declare, "not about truth and falsity at all". Really? Not at all? Not even one little bit?
- All sloganeering aside, there is an inescapable sense in which Wikipedia is about truth. A Wikipedia article is, after all, a series of statements. Here is an example of a statement, taken from today's feature article: "Cracker Barrel sells gift items including toys and woodcrafts." Do you see what the authors did there? They stuck their necks out and affirmed the truth - yes, truth - of the proposition that Cracker Barrel sells gift items including toys and woodcrafts. Naturally, we expect a citation, and they give us one. But what they don't do is express themselves this way: "In November 1995, journalist Richard L. Papiernik put forward the view that Cracker Barrel sells gift items including toys and woodcrafts."
- For God's sake, man, not every assertion has to be explicitly cast as being merely the point of view of a particular person or group of people, merely a view to be set alongside "rival" or "alternative" views. Not everything is a controversy. I can see how insistent you are on treating everything including simple mathematics as a controversy, come what may. To me that's folly, and I have better things to do with my time than carry on an unproductive discussion of this kind, so I'm done here. All yours; good luck, and I mean that genuinely. --Nabav (talk) 00:32, 12 January 2013 (UTC)
- Well, if Wikipedia were about truth, as you insist it ought to be, your section would for sure have to be deleted. As I told you before, it's to your advantage that Wikipedia is not about truth, but only verifiability. The calculations you do in your section simply don't make sense mathematically. If 1024 games are played (k=10) you claim that exactly 512 of these will give $1 in return, and so on. This is of course not true. It is very unlikely that this will happen, and it gets more and more unlikely the larger k is. It is true that in the limit the average proportion will approach the probability for each outcome. But if this is how you think about it you are not playing 1024 games anymore, but infinitely many games. And then the whole idea of explaining what happens when playing a small, finite number of gameplays is totally lost. iNic (talk) 01:57, 18 January 2013 (UTC)
- Maybe I didn't make myself clear. "I'm done" means I'm not available to be argued with anymore. If you disagree with the mathematical assertion made by Vivian and Pianca, take it up with Vivian and Pianca, not with me. That can be a little project for you. Another little project for you could be to search for the text "exactly 512 games will pay $1" in the section I wrote. (Hint: it doesn't exist.) Another little project for you could be to search for the following text in the section I wrote: "consider a series of 1,024 games. On average: 512 games will pay $1, [etc.]" (Hint: it's right near the beginning.) A final project for you could be to get yourself a coin, play a series of 1024 Petersburg games, calculate the average per-game winnings you achieved in this series, play a second series of 1024 Petersburg games, calculate the average per-game winnings you achieved in the second series, and repeat this process a billion times - or however long it takes you to realise that the ratio of [the number of series in which you earned more than $6 per game] to [the number of series in which you earned <= $6 per game] has been getting progressively closer to 1:1. And if that realisation makes you uncomfortable, don't take it up with me. Take it up with a higher authority, because I didn't make the laws of probability. Nor is it my job to spend endless hours trying to convince the unconvincible. --Nabav (talk) 23:32, 18 January 2013 (UTC)
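For what it's worth, the coin experiment described above is easy to run by computer rather than by hand. A sketch, assuming (as in the "512 games will pay $1" example) a $1 first payout that doubles with each consecutive head; the printed ratio can be compared with the 1:1 figure predicted above:
<syntaxhighlight lang="python">
# Play many series of 1,024 St. Petersburg games and count how often the
# average payout per game in a series exceeds $6 versus $6 or less.
# Assumes a $1 first payout that doubles with each consecutive head.
import random

def one_game():
    payout = 1
    while random.random() < 0.5:
        payout *= 2
    return payout

def series_average(n_games=1024):
    return sum(one_game() for _ in range(n_games)) / n_games

random.seed(0)
n_series = 10_000
above = sum(1 for _ in range(n_series) if series_average() > 6)
at_or_below = n_series - above
print(f"series averaging more than $6 per game: {above}")
print(f"series averaging $6 or less per game:   {at_or_below}")
print(f"ratio ≈ {above / at_or_below:.2f} : 1")
</syntaxhighlight>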
- No, you don't have to convince me that anything of what you believe is true. Wikipedia is not about truth at all, as I think I've told you before. So please just relax. iNic (talk) 01:02, 20 January 2013 (UTC)
- The number of errors you have made here approaches infinity. -- 96.247.231.243 (talk) 10:15, 22 March 2014 (UTC)
Math error in finite lotteries section
I corrected the math here. While the table effectively illustrated the logarithmic nature of the problem, the expected values were almost half what they should be.
This is most obvious for small W. Consider a lottery with total resources W = 2 dollars. According to the equation as it was written:
L = 1 + floor(log2(W)) = 1 + floor(1) = 2
E = L/2 + W/(2^L) = 2/2 + 2/(2^2) = 1 + 0.5 = 1.5
But how can E = 1.5 possibly be right? If you lose on the very first toss, you still get two dollars. Thus, for the expected value to be below 2, the house would have to pay LESS when you win than when you lose immediately.
One might say that the first part, 1/2 * 2, represents the first dollar of expected value. 1/4 * 4 represents the second dollar of expected value, etc. Thus the expected value should always be greater than or equal to floor(log2(W))+1.
For the "friendly game" with 100 dollar bankroll, we will get to 1/64 * 64, the sixth dollar of expected value before the casino runs into trouble with 1/64 * 100, a seventh dollar and change. The expected value will thus be 7.5625 rather than the 4.28 that was written before. — Preceding unsigned comment added by 116.25.204.211 (talk) 08:07, 27 July 2014 (UTC)
New section "Vivian"
The new recent developments subsection "Vivian" makes no sense and is sourced to a primary source in an obscure journal. So I'm reverting it. Loraof (talk) 01:12, 1 May 2016 (UTC)