Talk:Two envelopes problem/Arguments/Archive 1
This is an archive of past discussions about Two envelopes problem. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Simple Solution to Hardest Problem
"Let the amount in the envelope you chose be A. Then by swapping, if you gain you gain A but if you lose you lose A/2. So the amount you might gain is strictly greater than the amount you might lose."
Yes, but if you swap again, it's the FIRST envelope that has A dollars, so by switching back you either gain A/2 or lose A. Therefore, you can't switch forever and gain forever. The argument operates under the false assumption that whatever envelope you have contains A, but also that A is a constant. You can't have both.
-t3h 1337 r0XX0r
Oh, just noticed its irrelevance. Sorry, won't do this again... 1337 r0XX0r 15:39, 19 January 2006 (UTC)
Its irrelevance? I'm confused. The solution to the hardest problem seems pretty simple... you are switching the amount in the envelope depending on if you look at it like it has more, or less, from Y to 2Y. You could do the same thing with Y (you gain a Y if the other envelope has more) and 100Y (you lose 50 Y if your envelope had the most). ~Tever
- "To be rational I will thus end up swapping envelopes indefinitely" - This sentence is clearly false. After the first swap, I already know the contents of both the envelopes, and were I allowed a second swap, the decision whether on not to do it is simple. - Mike Rosoft 23:42, 24 March 2006 (UTC)
No, you never look into any envelopes in the original statement of the problem. INic 21:14, 27 March 2006 (UTC)
[[ The solution is simple. Everyone is trying to discern the probability of getting the greater amount, thus gaining the most out of the problem. However, if you were to not open either one, and continuously follow the pattern of swapping between envelopes, you would thus gain nothing. So, in order to gain the most, one would have to be content with gaining at all, thus gaining something, and thus gaining the most out of the probability, for without taking one, you gain nothing. ]] - Apocalyptic_Kisses, April 6th, 2006
- You are right. But what rule of decision theory do you suggest should be added to allow for this behaviour? It seems to me to be a meta-rule rather than an ordinary rule. Something like "When decision theory leads to absurd results please abandon decision theory!" INic 17:43, 18 April 2006 (UTC)
I may have missed something here, but my own analysis is based on the fact that money exists in units and A, or whatever the amount is in the envelope, is therefore a discrete variable. Any amount which is an odd number of units can only be doubled, which means that for any reasonable range of monetary amounts, amounts with even numbers of units are more likely to be halved than doubled. In fact, if you assume that there is a limit on the amount of cash available, any large even number can only be halved, and there are no large odd numbers. This means that across the distribution the losses from halving come out equal to the gains from doubling - because the larger numbers cannot be doubled. If you were to open the first envelope (which you might expect to tell you nothing), you would immediately swap any odd amount and stick with an even amount unless you were sufficiently confident it was a small amount. (Mark Bennet 26 Nov 07) —Preceding unsigned comment added by MDBennet (talk • contribs) 21:35, 26 November 2007 (UTC)
- That money comes in discrete units is irrelevant to the problem itself as it is easy to restate the problem using a continuous reward. We can put gold instead of money in the envelopes for example. As gold doesn't come in discrete units (except at atomic levels but then you can't see it anyway) your reasoning based on odd/even amounts doesn't help us at all. We can assume that there is a limit to the amount of money/gold available (even though I don't know what the upper limit would be), but it's irrelevant to this problem too as the subject opening the first envelope doesn't know the limit anyway. It is also possible to restate the problem in such a way that no limit exists, so any solution based on a limit assumption will fail to shed light upon the real cause of the problem. iNic (talk) 23:33, 28 November 2007 (UTC)
What's the problem?
Can someone explain why this is such a "paradox"? It seems to me to be so much mathematical sleight of hand. Representing the payoffs as Y and 2Y or 2A and A/2 are all just misdirection. Just being allowed to play the game means you're getting paid Y. The decision involved is all about the other chunk. So you've got 1/2 chance of getting it right the first time and 1/2 of getting it wrong. Switching envelopes doesn't change that... 1/2*0 + 1/2*Y = Y/2, which, added to the guaranteed payoff of Y gives 3Y/2, which is the expectation that we had before we started playing. 68.83.216.237 03:59, 30 May 2006 (UTC)
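For what it's worth, here is a minimal simulation sketch of that point (the amount Y = 100, the trial count, and the helper name play are all arbitrary, illustrative choices):

  import random

  def play(y, switch, trials=100_000):
      """Average payoff when the envelopes hold y and 2*y."""
      total = 0
      for _ in range(trials):
          envelopes = [y, 2 * y]
          random.shuffle(envelopes)      # random initial pick
          chosen, other = envelopes
          total += other if switch else chosen
      return total / trials

  print(play(100, switch=False))  # about 150, i.e. 3Y/2
  print(play(100, switch=True))   # also about 150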
- The problem is not to find another way to calculate that doesn't lead to contradictions (that is easy), but to pinpoint the erroneous step in the presented reasoning leading to the contradiction. That includes being able to say exactly why that step is not correct, and under what conditions it's not correct, so we can be absolutely sure we don't make this mistake in a more complicated situation where the fact that it's wrong isn't this obvious. So far no one has managed to give an explanation that others haven't objected to. That some of the explanations are very mathematical in nature might indicate that at least some think that this is a subtle problem in need of a lot of mathematics to be fully understood. You are, of course, free to disagree! INic 21:17, 30 July 2006 (UTC)
Spot the flaw? Okay.
- Denote by A the amount in the selected envelope.
- The probability that A is the smaller amount is 1/2, and that it's the larger also 1/2
And there's the flaw.
The 50/50 pick isn't between a doubled value and a halved value. There are four situations here, not two. Either we have an initial condition of {A, 2A} and you picked A with a 50% chance, or we had an initial condition of {A, A/2} and you picked A with a 50% chance. But you have no way of knowing the relative probabilities of the two initial conditions, {A, 2A} versus {A, A/2}, so you can't calculate the combined expectation value.
Jon Hurwitz 212.159.88.189 (talk) 13:33, 3 April 2009 (UTC)
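As an illustration of this point, here is a rough sketch that assumes a purely hypothetical prior (the smaller amount uniform on $1-$100); under that assumption the 50/50 split between doubling and halving holds for some observed amounts but not for others:

  import random
  from collections import defaultdict

  def trial():
      # Hypothetical prior, purely for illustration: the smaller amount is $1-$100.
      small = random.randint(1, 100)
      pair = [small, 2 * small]
      random.shuffle(pair)
      return pair[0], pair[1]            # (your amount A, the other amount)

  counts = defaultdict(lambda: [0, 0])   # A -> [times the other held 2A, times A was seen]
  for _ in range(300_000):
      a, other = trial()
      counts[a][1] += 1
      if other == 2 * a:
          counts[a][0] += 1

  for a in (80, 150):                    # P(other = 2A | A): near 1/2 for 80, but 0 for 150
      doubled, seen = counts[a]
      print(a, doubled / seen)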
The problem of measuring gain/loss
I suggest that the problem lies in the fact that the "gain" or "loss" of the envelope holder is being considered. Assuming A is the smaller amount in one of the envelopes, then the average amount of money to be holding after picking the first envelope at random is (A + 2A)/2 = 3A/2.
Conversely, assuming A is the larger amount in one of the envelopes, the average amount of money to be holding after picking the first envelope at random is (A + A/2)/2 = 3A/4.
In both cases, after having picked the first envelope, the envelope holder is unable to determine whether the amount in his/her envelope is the greater or lesser amount. If he/she were to assume that he/she held the average amount, then in both cases he/she would realise that there is the same amount to gain as to lose by switching envelopes (A/2 and A/4 respectively).
--TechnocratiK 16:56, 22 September 2006 (UTC)
Another proposed solution
Motion to include this in the main article... it's proposed by me. All in favour say "I"... seriously though, please give me feedback (rmessenger@gmail.com if you like), and please read the whole thing:
First, take A to be the quantity of money in the envelope chosen first, and B to be that of the other envelope. Then take A and B to be multiples of a quantity Q. Even if the quantity in A is known, Q is not. In this situation, there are two possible outcomes:
(I) A = Q, B = 2Q
(II) A = 2Q, B = Q
The question is whether we would stand to benefit, on average, by taking the money in B instead of our random first choice A. Both situations (I) and (II) are equally likely. In situation (I), we would gain Q by choosing B. In situation (II), we would lose Q by choosing B. Thus, the average gain would be zero: (1/2)(+Q) + (1/2)(-Q) = 0.
The average proportion of B to A is irrelevant in the determination of whether to choose A or B. The entire line of reasoning in the problem is a red herring. It is true that the average proportion of B to A is greater than one, but it does not follow from that determination that the better option is to choose B.
For Example: Assume one of the envelopes contains $100 and the other $50. The two possibilities are:
(I) A = $50, B = $100
(II) A = $100, B = $50
If you repeat the above event many, many times, each time recording the value of A, the value of B, and the ratio B/A, then:
the average of B/A will approach 5/4, and both the average of A and the average of B will approach $75. The average of B/A is totally irrelevant. What is relevant is that the average of A equals the average of B. This means that it makes no difference which envelope you choose. On average, you will end up with the same amount of money, which is the expected result.
Step 8 assumes that because the expected value calculated in step 7 is greater than A, one will gain on average by choosing B. This is simply and plainly false, and represents the source of the "paradox."
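A rough sketch of the repetition described above, using the $50/$100 example (the trial count is arbitrary):

  import random

  trials = 200_000
  sum_a = sum_b = sum_ratio = 0.0
  for _ in range(trials):
      a, b = random.choice([(50, 100), (100, 50)])   # the two equally likely cases
      sum_a += a
      sum_b += b
      sum_ratio += b / a

  print(sum_a / trials)       # about 75
  print(sum_b / trials)       # about 75
  print(sum_ratio / trials)   # about 1.25: the average of B/A exceeds 1 even though the averages of A and B agree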
- I like your reasoning! The only reason I deleted your addition of it was that it was original research, and that is strictly forbidden. However, it's not forbidden here at the talk page (but not encouraged either). I think you should elaborate on your idea some and try to get it published somewhere. What I lack in your explanation right now is a clear statement of the scope of your reasoning; how can I know in some other context that the expected value is of no use? I don't think you say it's always of no use, right? And another thing that is interesting with your solution is that you say it's correct to calculate the expected value as in step 7, only that it's of no use. But if it's of no use, in what sense is it correct? iNic 22:46, 13 November 2006 (UTC)
- No, I was wrong about that. The expected value calculation is done wrong. This is because we are supposed to be comparing the expected values of two options. One option is to keep A, the other is to go with B. If we do two different EV calculations for both, we will see that we can expect the same value for each. Take Q to be the smallest value between the two envelopes. There are two equally likely possibilities for each option: we hold the greater amount, or we don't. If we hold the greater amount, we hold 2Q, if not, we hold Q; as such: EV(keep A) = (1/2)(2Q) + (1/2)(Q) = (3/2)Q and EV(take B) = (1/2)(Q) + (1/2)(2Q) = (3/2)Q.
- They are equal, so we can expect the same amount of money regardless of whether we keep A or go with B. Amen.
- And, their equation for EV(B) is wrong for one simple reason: the value of A is different in each of the two possibilities. As soon as the two envelopes are on the table, the value of Q never changes. Since we don't know Q from opening one envelope, A must take on two different values in the two terms of their EV equation. A is not a constant in the equation even if we know what it is. Here's their equation: EV(B) = (1/2)(2A) + (1/2)(A/2) = (5/4)A.
- In the first term, A=Q, in the second, A=2Q. Since Q is constant, A must change. An EV equation must be written in terms of constants. Since theirs isn't, it's wrong.
- Think of it this way: since there are two different possibilities, there are two different A's. You don't know which one you have:
- Possibility 1: A = Q and B = 2Q
- Possibility 2: A = 2Q and B = Q
- So their EV equation, with the appropriate value of A in each term, would look like this: EV(B) = (1/2)(2Q) + (1/2)((2Q)/2) = (3/2)Q.
- OK, so this means that you now agree with all other authors that step 7 and not step 8 is the erroneous step in the first case? And you seem to claim that step 7 is the culprit even in the next case where we look in the envelope, as you say that "A is not a constant in the equation even if we know what it is." Does this mean that you disagree with the common opinion that it's step 2 that is the erroneous step there? And what about the next variant? Is the expected value calculation not valid there either? And how can I know, in some other context, that a number, say 1, is not really a constant? You still have to specify how we can avoid this paradox in general. As long as you haven't generalized your idea to a general rule you haven't really proposed any new solution, I'm afraid. iNic 02:26, 15 November 2006 (UTC)
- You're right. But it's simple: forget probability for a second, and imagine that as soon as the envelopes are on the table, you repeat the possible outcomes of the event millions of times. Each time the event takes place, there is some particular A and some particular B, the values you got that time. These are different each time you do it. So as a simple rule, they can play no part in an expected value calculation; unless A actually was a constant. Here's where it gets interesting: It seems like the same situation, but it's not. If we imagine A is a constant, and if the same event were repeated, A would not change, as is explained in steps 1-6. If you model the situation this way (I have done computer models for both), such that the experimenter, if you will (the computer), first chooses a value for A, then randomly selects a value for B as a function of A based on steps 1-6, then the expected value of B actually does end up equaling five fourths of the EV of A!
- If instead you tell the experimenter to first choose a value to be the smallest value, then choose which envelope to put it in, and put two times that in the other, the result is as you would expect for the real-life scenario. So while the possibility of B>A is the same for each instance of both situations, and the two possible values of B with respect to A are the same, 'tis not the same situation.
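A rough sketch of the two computer models described above (the concrete amount 100, the trial counts, and the function names are only illustrative):

  import random

  def model_fixed_Q(q=100, trials=200_000):
      """The real setup: the smaller amount Q is fixed, both envelopes are filled, you pick one."""
      sum_a = sum_b = 0.0
      for _ in range(trials):
          a, b = random.choice([(q, 2 * q), (2 * q, q)])
          sum_a += a
          sum_b += b
      return sum_a / trials, sum_b / trials    # both about 1.5 * Q

  def model_fixed_A(a=100, trials=200_000):
      """Steps 1-6 read literally: A is fixed first, then B is coin-flipped to 2A or A/2."""
      sum_b = 0.0
      for _ in range(trials):
          sum_b += random.choice([2 * a, a / 2])
      return a, sum_b / trials                 # about (A, 1.25 * A)

  print(model_fixed_Q())
  print(model_fixed_A())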
- I can make it even clearer how there are two different situations being represented by separating out the probabilistic variable:
- Starting with the true event: the experimenter chooses a smallest value, call it Q, and puts it in one envelope, and puts 2Q in the other. Then, let's say you flip a coin: if it's heads, Z = 1, if tails, Z = 0, for instance... now observe: A = (1 + Z)Q and B = (2 - Z)Q.
- As you can see, now we have an unchanging algebraic expression for all the variables. It's easy to see that the result of the coin toss simply dictates which envelope you pick up. Please note that B (referring to the last expression) is either (1/2)A or 2A with equal probability. It's also easy to write our expected value expression! There is only one (well understood) probabilistic event: a coin toss. We know the expected value of Z is exactly 1/2. Plug it in and we get what we would expect: EV(A) = EV(B) = (3/2)Q, which holds up regardless of whether Q is the average of a constantly changing value, or simply a constant through all instances. NOW, the "OTHER EVENT":
- The experimenter first chooses a value for A, then chooses B based on steps 1-6. If we accommodate the coin toss, we get: B = (1 - Z)(2A) + Z(A/2).
- First, notice that if Z=0, B = 2A, and if Z=1, B = A/2, and since the probability of either is equal, we would expect that this is identical to the true event, but it's not! If we try to solve for the expected value of B, we plug in our known expected value for Z and what do we get? Five fourths. What a surprise! So it's easy to see that, while these two situations appear similar, the similarities break down when Z assumes values other than 0 or 1. It's like saying: I'm a function with roots at 2, 4 and 7... what function am I? There can be many seemingly similar situations at first glance, with the primitive tools used to analyze this problem. But they behave differently.
- So the moral: if a quantity changes from instance to instance AND its expected value isn't already known, it CAN'T be used as a term in an expected value calculation! Just as you'd expect! We can't even use A to find the expected value of B, because if an EV equation were written in terms of a variable that changes all the time, then the expected value would always be changing!
- And specifically addressing the problem: step 1 is fine IF we remember that they have set A to equal the money in the envelope THIS TIME. Step 2 is testably TRUE. Steps 3-5 are stating the obvious. Step 6 is the problem. It is true within the parameters of step 1: that is, AT EACH INSTANCE, half of the time B=2A, and the other half B=(1/2)A. But this relates specifically to the values of A and B in that particular instance. As I stated, if we want to calculate expected value in this type of situation, it must be written in terms of ALREADY KNOWN expected values! We don't already know the expected values of either A or B, hence we can't squeeze blood from a turnip!
- In the true situation, there are two truly random probabilistic variables: Q and Z. A and B are simply functions of Q and Z (Z being the binary decision, Q being the random number). In their situation, they took the random probabilistic variables to be A and Z, and let B be a function of those. This is different because it assumes that the probability distribution of A is totally random, when in fact it depends on that of the true independent variables (the ones that are actually chosen at random!!!). A was never chosen at random! Only Q and Z are!
- When determining the expected value of a variable: Express the variable at any instance in terms of the random decisions or numbers that compose it. This equation will always hold true when relating the expected value of the variable to the known expected values of the random decisions or numbers.
- i.e. suppose I said I was giving away between 0-5 dollars to one of two people, AND between 10-15 dollars to the other. You're one of those people, and... say, Joe is the other. So Y is the amount of money you just got, and J is the amount Joe got. Is it worth your while to switch with Joe?
- First, identify the variables: I am choosing between two people, so there is one binary decision Z (independent). There is the amount of money between 0-5 dollars, call it q (independent), and the other quantity between 0-5 dollars (to be added to 10), call it Q (independent). Y and J are dependent upon these as such: Y = (1 - Z)q + Z(Q + 10) and J = Zq + (1 - Z)(Q + 10).
- And write expected value expressions as follows: EV(Y) = (1 - EV(Z))EV(q) + EV(Z)(EV(Q) + 10) and EV(J) = EV(Z)EV(q) + (1 - EV(Z))(EV(Q) + 10).
- We know EV(Z) = 1/2 and EV(q) = EV(Q) = 2.5. As such: EV(Y) = EV(J) = 7.5.
- The point is, had you assumed your money was an independent variable and somehow tried to define poor Joe's money in a given instance in terms of your own, you would have gotten it wrong! Neither J nor Y is an independent variable, hence we know nothing of their expected values. SO: if we had an expression for J in terms of Y, we wouldn't be able to fill in the expected value of Y, because it's dependent! We would be forced to do something tragic: assume Y is a constant, and define Joe's money in terms of this magical made-up constant! Therein lies the mistake! Note: when I give you your money, you know the value of Y for that instance. This doesn't mean you know anything about the infinitely many other possible values of Y!
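A rough sketch of this two-person example, assuming both amounts are uniform on their stated ranges (trial count arbitrary):

  import random

  trials = 200_000
  sum_you = sum_joe = 0.0
  for _ in range(trials):
      q = random.uniform(0, 5)           # the $0-$5 amount
      big = 10 + random.uniform(0, 5)    # the $10-$15 amount
      if random.random() < 0.5:          # the binary decision Z: who gets which amount
          you, joe = q, big
      else:
          you, joe = big, q
      sum_you += you
      sum_joe += joe

  print(sum_you / trials, sum_joe / trials)   # both about 7.5: no reason to switch with Joe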
Why is it that this thorough and indisputable resolution is not in the article? It just doesn't make sense that the original "problem", which is an insult to intelligence, has its own article, and the rational argument debunking the "problem" is disallowed from that article. Instead the article remains, an inane problem with a bunch of inane pseudo-solutions, while the real rational solution will not be submitted for reasons of academic bureaucracy. What happened to you Wikipedia, you used to be cool... Denito 10:28, 28 June 2007 (UTC)
History of the problem
Someone correct me if I'm wrong here, but in the original problem
- Two people, equally rich, meet to compare the contents of their wallets. Each is ignorant of the contents of the two wallets. The game is as follows: whoever has the least money receives the contents of the wallet of the other (in the case where the amounts are equal, nothing happens). One of the two men can reason: "Suppose that I have the amount A in my wallet. That's the maximum that I could lose. If I win (probability 0.5), the amount that I'll have in my possession at the end of the game will be more than 2A. Therefore the game is favourable to me." The other man can reason in exactly the same way. In fact, by symmetry, the game is fair. Where is the mistake in the reasoning of each man?
the mistake in reasoning is two-fold. The first mistake is that the probability of winning is not 0.5. If they both are equally rich, then they both have a total wealth of W. The amount in one wallet would be any amount A such that 0 <= A <= W. So the probability that A is greater than the amount in the other wallet is A/W, which may or may not be 0.5.
Second, if A is greater than B, then A + B < 2A and the amount in his possession at the end of the game cannot be more than 2A.
Fryede 20:42, 11 December 2006 (UTC)
Three envelopes
I don't have a solution to propose, but the formula (1/2 * 2A) + (1/2 * A/2) can also be interpreted as follows: there are three envelopes, containing the amounts A/2, A and 2A; you know you are holding the envelope with the amount A, and you are given the choice to keep it or swap with one of the other two envelopes. So in this case, if you choose to swap envelopes, the average or expected value is indeed 5/4A, but with this interpretation the above formula can be applied only to the first swap. H eristo 00:17, 5 March 2007 (UTC)
I agree with Heristo; in my opinion the paradox originates because we consider only 2 envelopes, but there are really 3 envelopes, containing:
X/2, X, 2X.
Only 2 are actual envelopes, and only the pairs {X/2, X} and {X, 2X} are possible.
If we assume X = 100 and we consider that we have the same probability of holding each of the three envelopes (2 real and one virtual):
$ | Probability | Value x Prob. |
---|---|---|
50 | 1/3 | 16.66 |
100 | 1/3 | 33.33 |
200 | 1/3 | 66.66 |
If we have the first possible couple {X/2, X}:
We will have:
$ | Probability | Value x Prob. |
---|---|---|
50 | 1/3 | 16.66 |
100 | 1/3 | 33.33 |
Sum | | 50 |
And we will be indifferent about changing envelopes.
If we have the second possible couple {X, 2X}:
We will have:
$ | Probability | Value x Prob. |
---|---|---|
100 | 1/3 | 33.33 |
200 | 1/3 | 66.66 |
Sum | | 100 |
And again we will be indifferent about changing envelopes.
In all possible scenarios we will be indifferent about changing the envelope.
Salvacam 14:23, 24 April 2007 (UTC)
Comment
The fallacy or source of error may be that the formula seems to allow 3 possible amounts for 2 envelopes: A, 2A and A/2.
I would suggest assigning only the values A and 2A, OR A and 0.5A. Using A and 2A: by selecting one envelope, I have a 50% chance of selecting the amount A and a 50% chance of selecting 2A. Without knowing what A may be, that gives my initial selection an expected value of 1.5A.
If I have selected the envelope holding A, then the other holds 2A. If I selected 2A, then the other holds A.
The two outcomes are equally likely, so the expected content of the opposite envelope is:
0.5 * A + 0.5 * 2A, or 1.5A.
This is no more than what I am holding. I have no incentive to switch.
Srobidoux 18:19, 13 May 2007 (UTC) srobidoux
Commentary 1
In general, I agree with Chase's answer. However, I also believe there is another, which is more straightforward, and ties in with his.
This paradox results from a wishful thinking bias. It only addresses the "destined to win" sequence, while there is a whole other "destined to lose" possibility that is not being addressed. If there is an apparently correct rationale for the "destined to win" sequence (which this article provides) then there must also be an apparently correct rationale for the "destined to lose" sequence which suffers from the same fallacy in and of itself. Combining the two and balancing them against each other gives you the correct answer of "coinflip". --76.217.81.40 16:21, 29 May 2007 (UTC)
I think it starts off wrong
The steps and resulting formula do not take into account the initial state of the participant. The way I see it, anybody who is presented with this situation first has the option to choose A, B, or N (for Neither). Obviously, since N does not result in any benefit, it is not likely that a rational participant would choose to decline both A and B. However, it is important for N to be included, because it IS an option. The importance of including N becomes apparent when either A or B can become a loss of benefit. In other words, if the scenario was...
You have $50. Before you are two envelopes that appear identical to you. One contains $100, while the other contains nothing. You may purchase either envelope for $50, but since that is all you have, you may only purchase one. You may also, if you so desire, choose neither and keep your existing $50. ...everything changes. We can now see how valuable having the option to choose N becomes. Now the "player" has a chance to have either $0, $50, or $100. Unlike the original scenario, we have an additional variable to consider, initial condition. The danger of losing money makes the option to not participate a valuable benefit to be considered. While the scenario I presented does vary from the "officially" stated scenario, I still think that the option to consider the initial condition applies. I'm nobody special, but this is my "math statements" for it.
N is a known amount that the participant currently has. A and B are unknown amounts that the participant does not have. N+A and N+B are confirmed, however, as being greater than N. Therefore, both A and B are better options than N alone.
I guess I can't represent this in numbers like other people can, but my understanding is that the value of switching is equal to the value of keeping. There is no further benefit, so except for non-rational reasons (greed, random choice, insanity, preference to choose left over right, etc) there should be no switching.
Maybe someone else can tell me how the math works out? - Nathaniel Mitchell 63.239.183.126 20:30, 9 August 2007 (UTC)
Problematic problem formulation
The reason for the paradox is an incorrectly formulated problem: we need to notice that we are dealing with a random experiment.
Let's analyze the problem mathematically. First, two definitions:
- A random variable is a mathematical function that maps outcomes of random experiments to numbers.
- Expected value of a random variable is the sum of the probability of each possible outcome of the experiment multiplied by its payoff ("value").
In order to calculate an expected value we must:
1. Define our experiment (Done in 'the setup' part)
2. Declare / Find-out the set of possible outcomes (constant values), and the probability of each outcome (constant probability)
3. Define the random variable (Define a payoff for each outcome of the experiment)
4. Calculate the expected value of our random variable.
Now, regarding the problem, as formulated:
Argument (1) states: 'I denote by A the amount in my selected envelope'.
This argument can be understood in two manners:
a. We are talking about a specific instance of the experiment. A is the amount in the selected envelope in this specific instance, hence, A is constant.
b. A is a random variable that denotes the amount in the selected envelope in our experiment.
Since arguments (4) and (5) deal with the value of A, and not the expected value of A, we can rule out option (b).
So we are left with the interpretation (a).
Hence, Arguments 1 through 6 are referring to our specific instance of the experiment, and they are flawless.
The problem is with argument (7): it calculates the expected value of a constant. Expected values can only be calculated for random variables. This is where the problem is incorrectly formulated.
Sorry, but even if we are told that envelope A contains $10, the paradox still holds. To our knowledge (so the paradox would say) we stand a 50% chance of swapping to $20 and a 50% chance of swapping to $5, i.e. gain $10 or lose $5. That is what the formula says. So why is this wrong?
You have to show why it isn't 50%, or why it isn't $20, or why it isn't $5, or why we can't add the terms and divide by 2, or some other logical error.
Let's show a case where the formula works: Say I give you an envelope (A) with some money in it. Say that I also flipped a coin to decide whether to put twice/half that amount in another envelope (B). I give you envelope A, do you want to swap to B?
In this case the formula correctly tells you to swap. However, the fixed/variable nature of A is just the same as before. What is different about this new game that means we can use the formula? Fontwell (talk) 18:00, 5 September 2008 (UTC)
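A rough sketch of this second game, in which A is fixed first and a coin flip then fills B (the value A = 10 and the trial count are arbitrary):

  import random

  a = 10                                   # any fixed amount placed in envelope A
  trials = 200_000
  keep = swap = 0.0
  for _ in range(trials):
      b = random.choice([2 * a, a / 2])    # the coin flip that fills envelope B
      keep += a
      swap += b

  print(keep / trials)   # A
  print(swap / trials)   # about 5A/4, so in this game switching really does pay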
solution
The first proposed solution solves the paradox, the rest of the article just muddies the issue in my view. Petersburg 21:11, 12 August 2007 (UTC)
- I disagree. The first solution states that step 7 is the cause of the paradox. If one denotes by A the value in a selected envelope then A must remain constant even if the value of A is unknown (the amount of money in the envelope is not changed, nor is the amount of money in the other envelope assigned to A). While the mistake becomes obvious in step 7, the critical mistake is made at step 6. If your envelope contains A then the other envelope contains either 2A or 1/2A, but the chance is 100/0 rather than 50/50. After all, one of the envelopes (2A or 1/2A) never even existed. Oddly enough, step 2 still remains true as a way for one to calculate the probability, even if there really is only one possibility. One must keep in mind that since we chose to denote one envelope by A, the 50/50 chance has already happened and determined the value in the other envelope (2A or 1/2A) as well as the value of A. Thus it would be foolish to take these values, which are linked to that past probability, and use them as if they were not linked. --NecaEum 91.155.63.118 00:31, 26 August 2007 (UTC)
Solution
The flaw in the switching argument is as follows:
In the calculation we assume two different cases. One in which the envelopes contain A and A/2 and one in which the envelopes contain A and 2A. These two cases give four possible combinations of the content of the envelopes:
1. A in my envelope and A/2 in the other.
2. A/2 in my envelope and A in the other.
3. A in my envelope and 2A in the other.
4. 2A in my envelope and A in the other.
In the erroneous calculation of the expected value we use only two of the four possible values and assume that the probability of each of them is 0.5.
The correct calculation would be to sum all the four possible values multiplied by 0.25. As we can see that the possible values of both envelopes are the same, the expected value of them is also the same.
- First, there are infinitely many cases; second, for the paradox it doesn't matter what we multiply the expected value by, as long as it's the same factor in both cases. Enemyunknown (talk) 08:47, 18 September 2008 (UTC)
More generally
Expected value is defined as the sum of the multiplication of each possible value of a variable by the probability of its occurrence. In this problem we have no information about possible values. The only thing we know is the proportion between the values in the two envelopes. We cannot calculate an expected value based on this proportion only. Yet, we can see that every possible value in one envelope is also possible, with the same probability, in the other envelope; so the expected value of both envelopes is the same. This is easy to see in a case where there is a limited number of possible values with equal probability. For example, one envelope may contain any sum from $1 to $1000 at random and the other its double. In the example above we took only two values: A/2 and A. But the same is also true if we have an infinite number of possible values with any distribution.
--Rafimoor (talk) 22:22, 2 May 2008 (UTC)
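A rough sketch of the $1 to $1000 example in the paragraph above (trial count arbitrary):

  import random

  trials = 200_000
  sum_first = sum_second = 0.0
  for _ in range(trials):
      x = random.randint(1, 1000)    # one envelope gets x...
      pair = [x, 2 * x]              # ...and the other its double
      random.shuffle(pair)
      sum_first += pair[0]
      sum_second += pair[1]

  print(sum_first / trials, sum_second / trials)   # the two averages agree (about 751 each)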
- One envelope is opened, so there is only one envelope for which we calculate an expected value, and it can hold half or double the amount in the first, so I don't see how what you wrote is supposed to be an answer. Enemyunknown (talk) 08:47, 18 September 2008 (UTC)
Simple Solution
I propose this very simple solution:
- You entered with $0. You just got an envelope with more than $0 in it. Don't debate about switching indefinitely, it doesn't matter which envelope you take, you just got something for nothing. —Preceding unsigned comment added by 129.3.157.107 (talk) 15:18, 8 May 2008 (UTC)
Very simple solution to the 'Two envelopes paradox'
There is one 'unit' of money in one of the envelopes and two units in the other one. Initially the player selects an envelope at random, and if he decides not to switch the return he would expect would be 1.5 units, i.e. the average of the two which is (1+2)/2 = 1.5 units. If the player decides to switch the expected return would be exactly the same, i.e. (2+1)/2 = 1.5 units.
The 'paradox' arises because the player believes he can double his money by switching when he gets the envelope with 2 units in it. If he initially draws this envelope he has no knowledge of whether or not this is the higher amount, and if he switches he will receive half.
This is the strategy of not switching:
Player draws 1 unit (probability 0.5) and doesn't switch. Payoff: 1 unit.
Player draws 2 units (probability 0.5) and doesn't switch. Payoff: 2 units.
Expectation = (0.5 x 1) + (0.5 x 2) = 1.5 units.
This is the strategy of switching:
Player draws 1 unit (probability 0.5) and switches. Payoff: 2 units.
Player draws 2 units (probability 0.5) and switches. Payoff: 1 unit.
Expectation = (0.5 x 2) + (0.5 x 1) = 1.5 units.
It therefore makes no difference whether the player switches or not.
—Preceding unsigned comment added by JandyPunter (talk • contribs) 22:55, 24 June 2008 (UTC)
Comments on Comments
There are an awful lot of comments showing why it makes no difference to swap envelopes, but they are completely pointless. We know there is no gain in swapping. That is why it is a paradox. What we need is a clear explanation of the logical error(s) in the paradox, which the article gives.
Also, there seems to be a certain amount of confusion in the comments about how there are really only two amounts of money and not three, or how only one condition is really true. Now, I may be missing something but I don't think that this is relevant at all, and perhaps it is a misunderstanding of how basic game theory works: all betting revolves around several possible amounts of money, when in fact there will only ever be one - your horse will win or not, but it will only do one of them. The point is that your knowledge is partial and so you have to assign a probability to every possible condition even though only one is "really" true.
While I think the main article is probably correct it still doesn't really satisfy - witness the comments.
My own unhelpful thoughts are that the paradox works well, and promotes extended unsatisfying explanations, because it carefully mixes several factors that individually would not be too bad but together produce a bit of a mess. These are:
1) The non specific values for both envelopes.
2) The unspecified distribution of values.
3) The unspecified method of choosing which envelope has more in it.
4) The decision of which envelope has the most in it being prior to the player's choosing of an envelope.
Fontwell (talk) 18:38, 5 September 2008 (UTC)
- I agree. The "two envelopes paradox" is resolved in the first section of the article and the other sections are merely cruft. The real value of this paradox is that it illuminates an important point in statistical methods--namely, the difference between "expected value" and statistical probability. 192.91.147.35 (talk) 01:30, 11 September 2008 (UTC)
Problem Supposedly Solved
The problem in the first approach in the article is that you're allowing yourself to double the high value and halve the low value, which you can't.
m = the sum of the money in both envelopes
considering both cases separately Tlow = (1/3)*m and Thigh = (2/3)*m
A1=(1/2)*Tlow + (1/2)*Thigh = (1/2)*m
Now since you can't halve the low value, the probability of doubling it is 1 and vice versa.
A2=2*(1/2)*Tlow + (1/2)*(1/2)*Thigh = (1/2)*m = A1
As opposed to the original approach:
A2 = (1/2)*2*(1/2)*Tlow + (1/2)*(1/2)*(1/2)Tlow + (1/2)*2*(1/2)*Thigh + (1/2)*(1/2)*(1/2)*Thigh = (5/8)*m
Illegal parts and the parts that are affected by them are bolded for your pleasure.
I'm not 100% sure of this, but it adds up and it makes sense. And it shows that there is no paradox, only failing math.
PS. Sorry I don't know latex syntax —Preceding unsigned comment added by 83.254.22.116 (talk) 06:14, 10 September 2008 (UTC)
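For what it's worth, the arithmetic above can be checked with a few lines of Python (taking m = 300 as an arbitrary example):

  m = 300.0                          # total money in both envelopes (arbitrary example)
  t_low, t_high = m / 3, 2 * m / 3

  a1 = 0.5 * t_low + 0.5 * t_high                                    # the first pick
  a2 = 2 * 0.5 * t_low + 0.5 * 0.5 * t_high                          # switch: the low value always doubles, the high always halves
  a2_orig = (0.5 * 2 * 0.5 * t_low + 0.5 * 0.5 * 0.5 * t_low
             + 0.5 * 2 * 0.5 * t_high + 0.5 * 0.5 * 0.5 * t_high)    # the original approach
  print(a1 / m, a2 / m, a2_orig / m)                                 # 0.5, 0.5, 0.625 (= 5/8)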
Possible solution
I have some ideas about this paradox
- A set of envelopes for which it pays to switch does not exist since the distance between the values is the same no matter from which side it is calculated.
- The reason why it seems to be worth it for both siblings to switch is that the losing one always loses half his money while the winning one gains an amount equal to the one he already has; this is simply due to the way relative differences are calculated. The total sum doesn't change, of course.
- I think that if conclusions made for the system with complete knowledge of the outcomes differ from conclusions made with only partial knowledge, the former should be considered correct. Here this means that since there is no benefit in switching in the case where we know what the pair is, there can be no benefit in switching at all.
- The hardest problem is with the statement that "one can gain A or lose A/2", which certainly appears to be true (and it's not the same numerical value, as one comment above states, which is easily shown: if you picked 4 the other can be 8 or 2). The problem here is that when we pick C we are comparing the pairs [C/2 C] and [C 2C]. For both pairs gain and loss are equal, but in the second pair they are twice as high as in the first. This scaling is the source of the paradox. The expected value as defined in the article is also shifted; by calculating gain and loss in relation to the expected value we can arrive at the correct result: for the first pair the loss is (C/2)/(3C/4) = 2/3 and for the second pair the gain is C/(3C/2) = 2/3. So both values, calculated with respect to the expected value (= the middle value of the pair), agree. This can be interpreted as meaning that only results obtained for systems which have the same expected value can be meaningfully compared; in this case that means that gain and loss can only be compared inside pairs and not between pairs. Since the expected value is only multiplied by a constant factor, the values can be normalized and compared by taking this factor into account. Enemyunknown (talk) 05:00, 14 September 2008 (UTC)
- Last point rephrased. This problem deals with an infinite sequence of possible pairs of values. The paradox appears because it seems that each pair is equivalent, and while we draw C we can assume both [C/2 C] and [C 2C]. This is only partly true: we can consider both pairs, but the result from one pair cannot be directly compared to the result from another pair. [C/2 C] is a pair shifted to the left in the infinite sequence of pairs, and this shifting is crucial because any gain and loss depends on the position of the pair we want to consider in this sequence: [A/4 A/2] [A/2 A] [A 2A] [2A 4A]. It is obvious that shifting to the left or right changes the potential gain and loss by a factor of 2. It is also obvious that no pair in this sequence has a gain different from its loss. The paradox arises since each value exists in this sequence twice, in two neighboring pairs, and considering those two adjacent pairs seems like a very intuitive thing to do, but this gives rise to the difference in loss and gain. In order to resolve the paradox, either potential gain and loss have to be calculated in the same pair, or, if we want to compare between pairs, the gain and loss have to be calculated in a manner independent of the pair's position in the sequence. This can be done by calculating the gain and loss with respect to the sum of the envelope contents; this calculation shows that you can always gain or lose 1/3 of the sum you consider to be in the envelopes.
- This is a great paradox
Enemyunknown (talk) 18:07, 14 September 2008 (UTC)
Please see if I have a point here:
Answer: The problem lies in the two '1/2's in this equation: 1/2(2A) + 1/2(A/2) = 5/4A
First case = {A, 2A} Second case = {A, A/2}
The probability of getting the first case and second case, for an inconsistent variable A, is 0.5. The probability of getting the first case and second case, for a consistent variable A in both the first case and second case, is indeterminate.
Example 1:
A is the value of money in ANY letter. A is 50% the bigger amount and 50% the smaller amount.
Example 2:
A is the value of money in THIS SPECIFIC letter. Let's open the letter and, for example, we see $2. Let X denote the value in the other letter, i.e. X = $1 or $4. X is not obtained from a fair toss of a coin but from an unknown source not given in the question. Thus we do not know the probability of $1 and $4 occurring.
Why is there a difference? In example 1, we are asking for the chances of getting the bigger and smaller value. Since we have the same chance of getting each letter, and there are only two outcomes, the probability is 50%. In example 2, we are asking for two specific outcomes, either {$2, $1} or {$2, $4}. The WAY of obtaining these two specific outcomes cannot be determined. Thus the probability cannot be determined, and cannot be taken as half. —Preceding unsigned comment added by Kuanwu (talk • contribs) 12:10, 13 March 2009 (UTC)
Comments about the Paradox and the Flaw
First, I have no background in classical paradoxes, so I have no insight into the historical classification or proper statement of this problem or puzzle. Accordingly, my comments are based solely on a logical analysis of the description of the problem as stated in the Wikipedia article as of January 2009. Second, the problem as stated appears to be a paradox because (or in the sense that) it appears that the stated reasoning leads to what can be reasoned to be an incorrect conclusion. Third, if the problem is a paradox, then there must be some flaw in the statements or in the logic of the problem.
About this Discussion
I began writing the following discussion while I was in the process of attempting to understand both the subject Wikipedia article and the paradox itself. Initially, I had no intention of writing a discussion of this length, but I found that as I completed a discussion on each topic, that there was inevitably either an omission, a need for further clarification, or another related topic. Regarding the order of the topics, I initially thought that it would be preferable (for both me and the reader) to have a reasonable understanding of the paradox before I attempted to offer credible comments and discussion about the article. I simply underestimated the amount of discussion for me to do that. When I finally completed all of the related topics, I realized the extent to which my discussion had grown and thought I might be able to summarize the lengthy discussion I had created on the analysis of the paradox. I then added the brief description in the following section. I also did not go back and re-edit what I had previously written, so there may be parts that appear to be redundant or repetitive.
A Brief Description of the Flaw in the Paradox
Basically, the flaw in the paradox is in confusing three related probabilities. First of all it is important to note that the outcome for everything is determined at the time the initial envelope is assigned to the player. However, because the player is unaware of the outcome, the objective of the player is to evaluate the probability based on the available information. If no additional information is provided, as in the version of the paradox where the envelope is unopened, then the reasoning by the player must be based solely on the probability of envelope selection, which is represented to be random or 50:50. This probability is known as a “prior probability.”
There are also two related probabilities associated with the swapping of the envelopes. Because the outcome of a potential envelope swap has effectively been already determined, the actual probability for gain versus loss is either 0:100 or 100:0. The remaining probability is the “prior probability” of the swap as perceived by the player. The tendency of the typical reader and the basis for the paradox is for the reader to transfer the 50:50 probability related to the envelope selection to the win-loss probability of the envelope swap.
Regarding the argument that states that the player can win A and only lose ½A, if A is assumed to be unknown, the statement regarding the player having a probability of winning A and losing ½A is true because A has one value in a winning swap and double that value in a losing swap. Overall, however, there is no net gain or loss. In contrast, if A is assumed to be known (or is known), the statement regarding the player having a probability of winning A and losing ½A is true when there is an equal probability of the other envelope containing an amount equal to either one-half or double the known amount. Otherwise, the statement regarding the player having a probability of winning A and losing ½A is false. If there is no way to assess whether the other envelope contains an amount equal to either one-half or double the known amount, the statement regarding the player having a probability of winning A and losing ½A would have to be considered false because it is not known to be true. Accordingly, there can only be a probability of gain when the statement regarding the player having a probability of winning A and losing ½A is known to be true.
There are also other approaches to analyzing the paradox. At least three are fairly popular. One approach relies on what is termed a Bayesian probability analysis that involves creating, proving, and disproving equations. If you are not familiar with the terminology and notation, it is easy to get lost. A second approach relies on an analysis of the distribution from which the amounts are selected, thereby showing that the distribution is not uniform and has a bias toward smaller values, thereby offsetting the advantage of winning A and losing ½A. Depending on how the analysis is performed, one can also get lost in equations and notation. A third approach relies on the use of a parallel problem using exaggerated values to illustrate a flaw in the paradox. Although the method works to illustrate a flaw, it does not provide a very comprehensive understanding of what happens in the various variations of the paradox.
Problem Variations and Terminology
In an effort to minimize confusion it may be helpful to note that as of January 2009 the main Wikipedia article discusses four variations of the problem. The first three variations are identified as “The problem,” as “An even harder problem,” and as “A non-probabilistic variant.” For the purpose of my comments I will herein refer to these variations as “the primary version,” “the augmented version,” and “the non-probabilistic version” respectively. The fourth variation is identified in the section “History of the paradox.” For the purpose of my comments I will herein refer to this variation as the “historical version.” Accordingly, it would be helpful in future discussions to be explicit in identifying the respective version or variation.
My Analysis of the Paradox
If you already have a fairly good understanding of the paradox, then you may want to skip ahead to the topics that relate to the subject Wikipedia article. If not, hopefully the following discussion will enhance your understanding.
Key Elements in the Statement of the Paradox
Despite the different variations and the different narratives that are used to present the paradox, several key elements are common and necessary to create the paradoxical condition. Although the Wikipedia article presents these in 12 enumerated statements, it is possible to state the paradox in a simpler form. Accordingly, a basic statement of the paradox is as follows:
An amount of money and double that amount of money are sealed into two envelopes. A player is allowed to randomly select either envelope, thereby implying a 50:50 probability in the selection. Thereafter, the player is provided the option to exchange his (or her) envelope for the remaining envelope. If the amount in the player's initial envelope is designated by A, then it would appear that the player has a 50:50 probability, in swapping envelopes, of gaining A or losing one-half A, thus providing an apparent advantage in making the swap.
Although the basic version is sufficient to create the paradox, additional steps or provisions can be added to either emphasize or alter the paradox. One variation is to extend the problem with the argument that if it is advantageous to perform the swap the first time, then by the same reasoning it would be advantageous to repeat the swap again, and therefore again and again, ad infinitum. This variation helps the reader to discover the paradox in case he (or she) hadn't already. A second variation is to have the player open the initial envelope and view the amount. This variation complicates the reasoning process by precluding the use of the simplest path of reasoning to solve the paradox.
Three Key Elements that Affect the Probability of the Outcome
First of all, in analyzing any version of the paradox, including the four versions of the paradox in the Wikipedia article, it is important to note several key elements that affect the probability of the expected outcome. To begin with, in all conventional versions of the paradox the outcome is predetermined at the time when the player receives (or is assigned) the initial envelope. By outcome I specifically mean whether or not there is a winning advantage in swapping the two envelopes. Accordingly, it is only because the player is unaware of the predetermined outcome that there exists the perception to the player that there is a probability-related event associated with the outcome. For the player not privy to any additional clues that reflect on the actual outcome, the probability of the outcome is simply based on the reasoning of the player to assess the original theoretical probability of the event. I believe "prior probability" is the term that is normally used to describe this probability. A simple case of this would be calling the flip of a coin after the coin was flipped but with the outcome not yet known.
A second key element that affects the probability of the expected outcome for the player is having additional information that may provide a truer assessment of the prior probability. For instance, in the coin flip mentioned above, if the player notes that the previous ten coin flips were heads, it may be prudent for the player to consider the possibility of a bias in the coin flip or the possible presence of cheating. Because the probability of ten consecutive heads being a random event is one in 1024, it may be prudent for the player to call heads in an effort to take advantage of the possible bias. And, if cheating is suspected, it may be more prudent for the player to simply not play. Obviously, as the number of consecutive heads increases beyond reasonable expectations, the greater concern the player should have for this apparent improbability. And, although I am mentioning this in my discussion for the purpose of being thorough, I do not see this element as being relevant to any conventional version of the paradox.
A third key element that affects the probability of the expected outcome is the player gaining clues or additional information that alters the prior probability of the event. The classic example of this is the Monty Hall problem. If you are not familiar with the Monty Hall problem, it may be beneficial to read about it before attempting to understand this paradox. However, with respect to this paradox, if the player were provided the initial envelope containing an amount prior to the amount being assigned to the second envelope (either one-half or double the amount in the initial envelope), the probability of the outcome is not the same as in the normal version of the problem described initially. In this case the order of events has been reversed, whereby the second amount is selected after the assignment of an amount to the initial envelope. In this case, from the perspective of the player, the amount in the initial envelope (designated in the article as A) becomes a fixed, though still unknown, amount. Accordingly, swapping the two envelopes under this set of conditions does, in fact, always provide the alleged potential advantage of gaining A with only the risk of losing ½A, as claimed in the original problem.
In summary, the key element in the above case is that the player receives the initial envelope containing an amount A, and then the second envelope is assigned an amount equal to either ½A or 2A. Thereafter, when the player is provided a choice to participate in a swap, the swap actually provides a possible gain of A with a possible loss of ½A. Furthermore, if the process were repeated, the administrator would have to reassign new amounts to the remaining envelope after each round. If there were no upper and lower limits restricting the allowable amounts, the process could be repeated indefinitely. Accordingly, the player could never fully lose the initial amount and would have the potential to gain an infinite amount. Calculation of the player's winnings and losses becomes a simple matter of adding the number of successes and subtracting the number of losses. For example, if the player participated in ten rounds where he (or she) won six times and lost four, the initial amount A will have quadrupled to 4A ( 2^(6 - 4)*A ). Also, to avoid confusion in later discussions, remember that because this case has a different order of events, it is not representative of the other variations discussed in the Wikipedia article. Accordingly, this case was discussed solely for the purpose of providing insight.
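A rough sketch of this repeated variant, in which the remaining envelope is refilled after every round (the starting amount and round count are arbitrary):

  import random

  def repeated_swaps(initial_a, rounds):
      """Each round the other envelope is refilled with A/2 or 2A and the player swaps."""
      a = initial_a
      for _ in range(rounds):
          a = random.choice([a / 2, 2 * a])   # a loss halves the holding, a win doubles it
      return a

  print(100 * 2 ** (6 - 4))        # 400: the 6-win, 4-loss example from the text
  print(repeated_swaps(100, 10))   # one random 10-round outcome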
Knowing the Amount Contained in the Initial Envelope
Another key element that may have the appearance of affecting the probability of the outcome is having knowledge of the amount contained in the initial envelope. This element is somewhat a paradox in itself. If in the conventional version of the problem the player receives the initial envelope, opens the envelope, and then counts the amount, the claimed potential advantage (of gaining A with only the risk of losing ½A) now becomes more difficult to refute. For example, if the envelope contains $20, the player can easily reason that the second envelope contains either $10 or $40. Accordingly, it certainly appears that he (or she) can only gain $20 or lose $10, an apparent advantage (though it will be shown later that this apparent advantage does not always result in an actual advantage).
Regarding the paradox of this case, there are actually two arguments that result in a paradox. Both arguments result from reasoning performed by the player. The first argument is that the player should have reasoned (1) that the amounts in the two envelopes were selected prior to the initial envelope being received by the player, (2) that the event that determined the probability has passed, (3) that the amount in the initial envelope had a 50:50 probability of being the greater or the lesser amount, and (4) therefore there is no advantage to the swap. The second argument is that the player should also have reasoned that the simple action of learning the amount contained in the initial envelope is not an event that can logically be seen as having an effect on the outcome.
Accordingly, the above analysis gets to the heart of the fundamental paradox associated with the two-envelope problem. In brief, how can the above example be so obviously true and yet not actually be true? To answer that, let's return to the previous example. Recall that after the player learns that the initial envelope contains $20, the player easily reasons that the second envelope must contain either $10 or $40, and accordingly reasons that he (or she) can only gain $20 or lose $10, an apparent advantage. The reason the flaw is so difficult to identify is because the above statement is actually true. The flaw in the reasoning is not in the above statement, but in the fact that there is not a 50:50 probability that the other envelope contains either $40 or $10. This can best be illustrated by using several more examples.
Accordingly, suppose that the two subject envelopes were set up to contain $10 and $20. The player then learns that his (or her) envelope contains $20. In this case the actual probability of the player winning in a swap is zero while the probability of losing $10 is 100 percent. However, because the player only knows the amount in the initial envelope and not the amount in the second envelope, he (or she) is unaware of these probabilities. Worth noting is that the probability of winning and losing is not 50:50 as it may appear to the player, but is actually 0:100. By comparison, suppose that the two subject envelopes were instead set up to contain $20 and $40. As before, the player then learns that his (or her) envelope contains the same $20. In this case, however, the actual probability of the player losing in the swap is now zero while the probability of winning the greater amount of $20 is 100 percent. Again worth noting is that the probability of winning and losing is again not 50:50 as it may appear to the player, but is now 100:0.
Next, if we assemble pairs of amounts to use in setting up our sample problem, we discover that when the player swaps, smaller amounts win at a greater frequency while larger amounts lose at a greater frequency. For example (and consistent with the above examples), suppose that the pool of amount pairs consists only of pairs of $10 and $20 and pairs of $20 and $40, in equal proportion. Again, as before, these amounts are not disclosed to the player. There are then three possible amounts for the initial envelope and four possible outcomes when swapping for the second envelope. As before, let's represent the amount in the initial envelope as A. When A equals $10, the gain is $10. When A equals $20, there are two possibilities: a gain of $20 or a loss of $10. And when A equals $40, the loss is $20. To assess the overall gain or loss, we need only compare the totals. The total gain over these four possible outcomes is $30 ($10 + $20), and the total loss over the four possible outcomes is also $30 ($10 + $20). Accordingly, there is no net gain from swapping for the second envelope, contrary to the claimed advantage.
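If it helps, the four-outcome bookkeeping above can be verified with a short Python sketch (the dollar figures are those of the example, and the equal frequency of the two pairs is the stated assumption):

# Enumerate the four equally likely (pair, initial envelope) outcomes for a
# pool made up of the pairs ($10, $20) and ($20, $40) in equal proportion.
pairs = [(10, 20), (20, 40)]

gains = losses = 0
for low, high in pairs:
    for initial in (low, high):
        other = high if initial == low else low
        change = other - initial      # result of always swapping
        if change > 0:
            gains += change
        else:
            losses += -change

print(gains, losses)   # both print 30, so swapping yields no net gain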
In brief, depending on the frequency distribution of the amount pairs in the pool, the player will tend to win at a higher rate when the known amount is lower and lose at a higher rate when the known amount is higher. Overall, however, the player will win and lose with a statistical probability of 50:50, and therefore satisfy the original conditions stipulated in the problem. Accordingly, although it is true to state (1) that the probability of A being the smaller amount is one-half and (2) that the probability of A being the larger amount is also one-half, once the value of A is known these statements are no longer true for all specific values of A. Although this appears to violate reason, it does not. Recall that the outcome was predetermined before the swap, and that the probability of winning or losing under discussion is the probability as perceived by the player, which can be altered by the available knowledge. After all, if the player learns the amounts contained in both envelopes, few, if any, would deny that the player's perceived win-loss probability for the swap has changed.
In summary, regarding the previously raised question of whether having knowledge of the amount contained in the initial envelope has an effect on the outcome, the answer is that it does not. However, as mentioned above, it does alter the probability as perceived by the player and thereby alters the reasoning used by the player to form his (or her) conclusion. For example, when the player doesn't know the amount contained in the initial envelope, the simpler path of reasoning is to represent the initial pair of unknown amounts as a single pair, for example C and 2C. However, when the player knows the amount contained in the initial envelope, the path of reasoning using a single pair of amounts represented by a single pair of variables is no longer valid, and it becomes necessary to represent the possible amounts as two possible pairs having a common member, for example ½A, A, and 2A. And although both methods yield the same result (that there is no advantage in swapping), the respective variables represent different amounts; therefore the probability statements expressed in terms of these variables do not have the same meaning and cannot be interchanged. It is because of our tendency to unknowingly switch between these two different paths of reasoning that we are easily drawn into the paradox. It is also the reason why the paradox is so difficult to resolve.
The Truth about the Argument that a Player Can Win A and only Lose ½A
Regarding the argument that the player can win A and only lose ½A, this argument can be either true or false depending on the definition of A. If A is assumed to be unknown and is therefore a variable, the argument is true. However, because A has one value in a winning swap and double that value in a losing swap, there is no net gain or loss. In contrast, if A is assumed to be known (or actually is known), then there is no longer an associated 50:50 probability of winning or losing for all specific values of A. And although A is not assumed to be known in some versions of the paradox, there is a tendency for the reader to reason as if it were known. This is especially true when reasoning in terms of the variable A. Also, the fact that the reader can imagine the player holding an envelope containing an amount creates a tendency to switch from one line of reasoning (based on the amount being unknown) to the other line of reasoning (based on the amount being known). In either case, however, the player can be observed to win a greater number of swaps when A is smaller and to lose a greater number of swaps when A is larger, but overall to win and lose a statistically equal number of times. So, although the player can win A and only lose ½A in an individual case, the player does not win A and only lose ½A on an overall basis.
Related to the above, one should be clear that the act of the player learning the amount contained in the initial envelope does not in any way affect the outcome of the swap. However, it does affect the player's perception of the swap and thereby the line of reasoning available to the player. For example, if the player knows that the lower and upper limits of the amounts in the envelopes are $1 and $1000 respectively, valid reasoning would indicate the following regarding a swap: If the initial envelope contains $1, the probability of a gain is 100 percent. If the initial envelope contains $1000, the probability of a gain is zero. This should be seen as obvious because one of the possible mating pairs is eliminated by the upper and lower limits. Extending the same reasoning, if the initial envelope contains between $1 and $2, the probability of a gain is again 100 percent. Similarly, if the initial envelope contains between $500 and $1000, the probability of a gain is again zero. And if the initial envelope contains $2 through $500, the probability of a gain on any swap is 3:1 or 0.75 (equivalent to a 50:50 probability of winning A and losing ½A).
By the same reasoning, if the amount pairs were restricted to being either $10 and $20 or $20 and $40, in performing the swap the player would always win when having an initial amount of $10, always lose when having an initial amount of $40, and would win and lose 50 percent of the time when having an initial amount of $20. In this example the gains and losses both add to $30, and are thereby equal. However, if we only consider the cases where the initial amount was $20, we would observe the player to have won and lost 50 percent of the time for a net gain of $10 ($20 - $10), an average gain of $5 ($10/2), and a probability of gain on any swap of 3:1 or 0.75 (again the equivalent to a 50:50 probability of winning A and losing ½A).
In summary, consider the claim that the player can win A while only risking ½A. If A is assumed to be unknown, the claim is true because A has one value in a winning swap and double that value in a losing swap; overall, however, there is no net gain or loss. In contrast, if A is assumed to be known, the claim is true only when there is an equal probability of the other envelope containing either one-half or double the known amount; otherwise it is false. And if there is no way to assess whether the other envelope is equally likely to contain one-half or double the known amount, the claim must be treated as false, because it is not known to be true. Accordingly, there can only be a probability of gain when the claim is known to be true.
Altering the Frequency Distribution of the Amount Pairs in the Envelopes
In the commonly stated versions of the paradox, no consideration is given to the frequency distribution of the amount pairs that are selected to be placed in the two envelopes. However, one of the four versions of the paradox addressed in the Wikipedia article (titled “An even harder problem”) uses a distribution that is specifically meant to be non-uniform. In this variation, the probability is determined by a mathematical formula under which the amount pair containing the lower amount has a frequency probability of 3/5, while the amount pair containing the higher amount has a frequency probability of 2/5.
An example of two such pairs having a uniform frequency distribution is the case where the frequency of a pair of envelopes containing $10 and $20 is the same as for a pair of envelopes containing $20 and $40. In this case a player learning that the initial envelope contained $20 would assign a 50:50 probability to the other envelope containing either $10 or $40. By comparison, an example of two such pairs having a non-uniform frequency distribution of 3/5:2/5 is the case where the frequency of a pair of envelopes containing $10 and $20 is 3/5 while the frequency of a pair of envelopes containing $20 and $40 is only 2/5. In this case a player learning that the initial envelope contained $20 would assign a 60:40 probability to the other envelope containing $10 as opposed to $40. Because it appears that the frequency distribution could have an effect on the outcome of the problem, we need to understand its effects well enough to make that determination.
First, let's consider the case where the player does not know the amount in the initial envelope. In this case the problem is completely independent of the distribution of the amount pairs. Whatever the frequency of a given pair, its smaller and larger amounts are equally likely to become the amount contained in the initial envelope. Because the amount is not disclosed to the player, the player has no additional information and must therefore base his (or her) decision to swap on reason alone. Because sound reasoning indicates that there is no advantage in swapping an unknown amount, swapping becomes a neutral proposition.
Next, let’s consider the case where the player does know the amount in the initial envelope. In this case, if the distribution is uniform and if the player knows the lower and upper limits of the amounts in the envelopes, then the probability of gain can be reasoned over a portion of that range. For example, if the amount pairs were restricted to being either $10 and $20 or $20 and $40, in performing the swap the player would always win when having an initial amount of $10, always lose when having an initial amount of $40, and would win and lose 50 percent of the time when having an initial amount of $20. In this example the gains and losses both add to $30, and are thereby equal. However, if we only consider the cases where the initial amount was $20, we would observe the player to have won and lost 50 percent of the time for a net gain of $10 ($20 - $10), an average gain of $5 ($10/2), and a probability of gain on any swap of 3:1 or 0.75 (the equivalent to a 50:50 probability of winning A and losing ½A).
Next, let's consider the case where the player knows the amount in the initial envelope and is also aware of a non-uniform distribution of the amount pairs. Again, let's use the previously used ratio of 3/5 to 2/5. If again the amount pairs were restricted to being either $10 and $20 or $20 and $40, in performing the swap the player would always win when holding an initial amount of $10, always lose when holding an initial amount of $40, and would win 40 percent of the time and lose 60 percent of the time when holding an initial amount of $20. However, because the frequency of the $10 and $20 pair is 3/5 while the frequency of the $20 and $40 pair is 2/5, we have to weight the gains and losses accordingly. In this example the gains and losses become $28 ($10 * 1.2 + $20 * 0.8) and $28 ($10 * 1.2 + $20 * 0.8) respectively; they are slightly smaller than before, but still equal. However, as before, if we only consider the cases where the initial amount was $20, we would observe the player to have won only 40 percent of the time and lost 60 percent of the time, for a smaller net gain of only $4 ($20 * 0.8 - $10 * 1.2), an average gain of $2 ($4/2), and a probability of gain on any swap of 3:2 or 0.60 (the equivalent of a 40:60 probability of winning A and losing ½A).
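As a rough check of both the uniform and the 3/5:2/5 cases above, here is a minimal Python sketch of the conditional bookkeeping when the player sees $20 (the pair weights passed in are the only assumptions; the dollar amounts are those of the example):

def conditional_on_20(weight_low_pair, weight_high_pair):
    # Given the player sees $20, weight the two pairs that contain $20:
    # ($10, $20) with weight_low_pair and ($20, $40) with weight_high_pair.
    # Return P(other envelope is $40) and the expected change from swapping.
    total = weight_low_pair + weight_high_pair
    p_40 = weight_high_pair / total       # pair ($20, $40): swap gains $20
    p_10 = weight_low_pair / total        # pair ($10, $20): swap loses $10
    expected_change = p_40 * 20 + p_10 * (-10)
    return p_40, expected_change

print(conditional_on_20(1, 1))        # uniform: P = 0.5, expected change +$5
print(conditional_on_20(3/5, 2/5))    # 3/5 : 2/5: P = 0.4, expected change +$2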
In summary, altering the frequency distribution of the amount pairs only affects the probability of winning or losing in a swap when the amount in the initial envelope is assumed to be known and when there is also a known probability of the other envelope containing either one-half or double the known amount. When that is true, however, the actual probability of a gain or loss is no longer the previously claimed even chance of winning A versus losing ½A. Other than in this specific case, altering the frequency distribution of the amount pairs does not affect the probability of winning or losing in a swap.
The Four Versions of the Paradox in the Wikipedia Article
The following discussion relates to the four mentioned versions of the problem that are addressed in the Wikipedia article. To spare yourself possible confusion, be aware that my previous discussions above were related to cases where either the setup was altered or the player had learned the amount contained in the initial envelope. In contrast, the versions of the problem that are discussed below are all cases where the order in the setup has not been reversed and the player has not learned of the amount contained in the initial envelope. Because of this distinction, the respective analysis and solution may not follow the same path of reason as mentioned in some of my discussions above.
The Primary Version
As mentioned previously, the statement of the paradox in the Wikipedia article is given in 12 enumerated statements. Although the 12 statements may or may not have some basis in the historically correct version of the paradox, this version is not only inconsistent with the popular versions but appears to extend beyond the basic version required for gaining a fundamental understanding of the paradox.
Regarding the discussion of the flaw in the paradox, the flaw in the primary version of the paradox is explained mathematically in the section identified as “Proposed solution.” In brief, the flaw is attributed to a misrepresented and thereby incorrect calculation of the expected outcome stated in step 7 of the problem. However, I also found it possible to analyze the flaw by the path of reason provided below.
First, if the player reasons that there are only two envelopes containing two amounts, and for the purpose of reasoning simply calls these amounts C and 2C, then the player can extend this reasoning as follows. If initially the player randomly chooses the envelope containing the smaller amount C, then A equals C by definition. If the player then chooses to swap that amount, amount A, for what could be perceived as being either ½A or 2A, the actual swap can only be for amount 2A, thereby providing an apparent gain of A. In terms of C, however, the swap is equivalent to having swapped the initial amount C for amount 2C. Because the player initially selected the envelope containing amount C, the smaller amount, there is no amount equal to ½A in this case.
By comparison, if initially the player randomly chooses the envelope containing the larger amount 2C, then A equals 2C by definition. If the player then chooses to swap that amount, again amount A, for what again could be perceived as being either ½A or 2A, the actual swap this time can only be for ½A, thereby providing an apparent loss of ½A. In terms of C, again, the swap is equivalent to having swapped the initial amount 2C for amount C. Similarly, because the player initially selected the envelope containing 2C, the larger amount, there is no amount equal to 2A in this case. Accordingly, despite the truth of the statement that the player can gain A and only lose ½A, this statement is only true because A has twice the value in a loss as in a win. In terms of C, the player can only win or lose the same amount, amount C.
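This two-case enumeration is small enough to spell out in a few lines of Python (the value chosen for C is arbitrary; only the relationship between the two envelopes matters):

# Enumerate the two equally likely first picks for a fixed pair (C, 2C).
C = 10                               # any positive amount will do
for A in (C, 2 * C):                 # A is whatever the player picked
    other = 2 * C if A == C else C   # the remaining envelope
    print(f"A = {A}: swapping changes the amount by {other - A:+d}")
# The change is +C when A = C and -C when A = 2C, i.e. the "gain of A" and
# the "loss of ½A" are the same amount C, as argued above.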
The source of confusion creating the paradox is in being drawn into false reasoning based on the belief that, because the amount contained in the initial envelope can be represented by A, A must be a fixed amount. It is easy to be drawn into this trap because the player can see and hold the envelope or, at least, imagine doing so. In this context it is then difficult to revert to the logic that the outcome was predetermined at the time the initial envelope was assigned.
Also worthwhile noting is that statement 6 in what is called “The switching argument” (stated as, “thus, the other envelope contains 2A with probability 1/2 and A/2 with probability 1/2”) is very open to misinterpretation. The truth of this statement is dependent on the precise definition of variable A. As stated previously, although it is true to state that (1) the probability of A being the smaller amount is one-half and (2) the probability of A being the larger amount is also one-half, once the value of A is known, these statements are no longer true for all specific values of A.
The Augmented Version
In the Wikipedia article the augmented version of the paradox alters the frequency probability of the amount pairs that contain a common amount. For instance, in the primary version of the problem the frequency probability of a pair of envelopes containing $10 and $20 is the same as for a pair of envelopes containing $20 and $40. In this variation, the probability is determined by a mathematical formula under which the pair containing the lower amount has a frequency probability of 3/5, while the pair containing the higher amount has a frequency probability of 2/5. Doing this causes some confusion. Although it might appear that this has an effect on the overall 50:50 win-loss probability of a swap, it only does so under certain conditions. Tracking this distinction appears to add another layer to the confusion already present in the original version of the paradox.
Regarding the mathematical formula ( 3/5 * x/2 + 2/5 * 2x = 11/10 x ) that is used to calculate the expected outcome to be 11/10 x, the formula is incorrect for the same reason that the previous mathematical formula ( ½ * 2A + ½ * A/2 = 5/4 A ) was incorrect in the primary version of the problem. Interestingly, in the primary version of the problem the flaw in the formula was uncovered and explained in terms of the expected amount contained in the two envelopes. The formula for that calculation was ½ * C + ½ * 2C = 3/2 C. For example, if C is equal to $10 and 2C is equal to $20, then the average or expected value is $15.
If we now apply this same reasoning to the augmented version of the problem, the formula becomes 3/5 * C + 2/5 * 2C = 7/5 C. And if again we use the same values for C and 2C, the average or expected amount contained in the envelopes is $14. Because the frequency of $10 is stipulated to be 3/5 while the frequency of $20 is stipulated to be the remaining 2/5, this expected amount is correct. Accordingly, if this reasoning is applicable and sufficient for the “primary version” of the problem as stated in the Wikipedia article, it should also be applicable and sufficient for the “augmented version.” The article simply does not address this issue and instead discusses reasoning along a completely different line. However, despite all of the discussion, it certainly appears that the solution should be identical to that of the primary version of the problem.
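A short numerical check of the two expected-content formulas, using the same illustrative value of C = $10 as above:

C = 10
uniform_expected  = 0.5 * C + 0.5 * (2 * C)    # ½C + ½(2C) = 3/2 C
weighted_expected = 3/5 * C + 2/5 * (2 * C)    # 3/5 C + 2/5 (2C) = 7/5 C
print(uniform_expected, weighted_expected)     # 15.0 and 14.0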
Regarding a solution that uncovers the flaw in this version of the paradox (if that is the objective), I also found it possible to analyze the flaw using the same path of reasoning I used for the primary version. That path of reasoning is provided below.
First of all, to minimize confusion over the frequency of the pairs of amounts and the amounts themselves, let’s assign designations for everything. Accordingly, let’s call the pair containing the lower amount X and the pair containing the higher amount Y. In terms of the common member B, it follows that X represents B/2 and B, while Y represents B and 2B.
Next, suppose that the envelopes contain the pair designated as X. And, as before, if the player reasons that there are only two envelopes containing two amounts, and for the purpose of reasoning simply calls these amounts C and 2C, then the player can extend this reasoning as follows. If initially the player randomly chooses the envelope containing the smaller amount C, then A equals C by definition. If the player then chooses to swap that amount, amount A, for what could be perceived as being either ½A or 2A, the actual swap can only be for amount 2A, thereby providing an apparent gain of A. In terms of C, however, the swap is equivalent to having swapped the initial amount C for amount 2C. Also worth noting is that in terms of B, C equals B/2 and 2C equals B. Therefore the swap was also equivalent to having swapped the initial amount B/2 for amount B.
By comparison, if initially the player randomly chooses the envelope containing the larger amount 2C, then A equals 2C by definition. If the player then chooses to swap that amount, again amount A, for what again could be perceived as being either ½A or 2A, the actual swap this time can only be for ½A, thereby providing an apparent loss of ½A. In terms of C, again, the swap is equivalent to having swapped the initial amount 2C for amount C. Again worth noting is that in terms of B, C still equals B/2 and 2C still equals B. Therefore the swap was also equivalent to swapping the initial amount B for amount B/2. Accordingly, in terms of either B or C, the gain (B/2 or C) and the loss (B/2 or C) are the same.
Next, we simply repeat the same reasoning for the pair of amounts designated as Y. Again we would expect to get the same matching gain and loss. However, now worth noting is that in terms of B, C equals B and 2C equals 2B. Accordingly, in terms of either B or C, the gain (now B or C) and the loss (B or C) are the same.
In this version of the problem the source of confusion creating the paradox extends one level deeper than in the classic version. Not only is there the original trap of believing that, because the amount contained in the initial envelope can be represented by A, A must be a fixed amount; there is also the temptation to confuse the frequency probability of the amount pairs with the probability of the two amounts within a pair.
The Non-Probabilistic Version
In the Wikipedia article the non-probabilistic version is stated in two enumerated statements as follows:
1. Let the amount in the envelope chosen by the player be A. By swapping, the player may gain A or lose A/2. So the potential gain is strictly greater than the potential loss.
2. Let the amounts in the envelopes be Y and 2Y. Now by swapping, the player may gain Y or lose Y. So the potential gain is equal to the potential loss.
Although the Wikipedia article states that “so far, the only proposed solution of this variant is due to James Chase,” I find this confusing and somewhat difficult to believe. This variation does not appear sufficiently different from the primary version for the same reasoning not to apply.
Regarding the truth of the two enumerated statements, statement 1 is false and statement 2 is correct. The part of statement 1 that is incorrect is the stated conclusion (“so the potential gain is strictly greater than the potential loss”). The flaw in the conclusion results from faulty reasoning between the previous statement (“by swapping, the player may gain A or lose A/2”) and the stated conclusion. The suggested reasoning is that because the player may gain twice the amount he (or she) may lose, there must be a net potential gain. This conclusion is incorrect because it does not consider the offset created by the fact that swapping larger values of A results in a greater number of losses than gains, while swapping smaller values of A results in a greater number of gains than losses. Although neither of the enumerated statements specifically claims the probability of the second envelope containing either ½A or 2A to be 50:50, it is implied. That implied 50:50 probability, however, also implies that each specific value of A must carry the same 50:50 probability. This was discussed in greater detail previously.
Regarding the solution to the paradox, if the player initially chooses the envelope containing the amount Y, then A equals Y as defined in statement 1. Accordingly, the player can only gain an amount Y in swapping for the remaining envelope containing the amount 2A. In terms of amount A, the player effectively swaps amount A for amount 2A. Because the player initially selected the envelope containing the amount Y, the smaller amount, there is no amount equal to ½A. By comparison, if the player initially chooses the envelope containing the amount 2Y, then A equals 2Y as defined in statement 1. Accordingly, the player can only lose an amount Y in swapping for the remaining envelope containing the amount ½A. And, in terms of amount A, the player effectively swaps amount 2A for amount A. Similarly, because the player selected the envelope containing 2Y, the larger amount, there is no amount equal to 2A. Accordingly, the player can only gain or lose the amount Y, and the reasoning is an exact parallel to the reasoning in the primary version of the paradox.
Regarding the claim in the article that the paradox of the non-probabilistic version is difficult to resolve, the above explanation does not bear that out. Perhaps there is some misunderstanding, or perhaps this issue requires further research.
The Historical Version
In the Wikipedia article the historical version of the problem that is characterized by the narrative of two strangers wagering over the amount of money in their wallets appears to be another version of the Necktie Paradox rather than a version of the two-envelope problem. However, the historical version differs from the Necktie Paradox in that each man may or may not possess information regarding the contents of his own wallet that may alter his own advantage or disadvantage in the wager. Accordingly, if both men are oblivious to the contents of their wallet, the structure of the wager becomes identical to that of the Necktie Paradox. However, if either man is aware of the contents of his wallet, that knowledge can have a significant effect on the decision to partake or not partake in the wager.
For instance, if one of the men were carrying more than an expected amount of money in his wallet, he would be wise to not partake in the wager. Similarly, if one of the men were carrying less than an expected amount of money in his wallet (for example, no money), he would likely have a winning advantage. Also, because either man can have the ability to base the decision of whether to make the wager on his knowledge of the contents of his wallet, the problem provides no rationale for providing credibility to having a 50:50 or random probability for either man winning the wager. However, in the interest of focusing only on the paradox portion of the problem, it would probably be more prudent to play along with the intent of the narrative and not be picky about the unintentional flaws in the narrative.
Despite the history of how the two-envelope problem may have evolved, I disagree that the two-envelope problem is simply another version of the two-wallet problem. First, there is the problem of correlating the parenthetically stated “½” probability to the narrative, as I mentioned above. Second, even though neither man knows the amount in the wallet of his rival, neither man is precluded from being fully aware of the amount in his own wallet. And, assuming that this amount is reasonable and thereby does not completely override the presumed 50:50 probability of winning or losing, one man knowing the amount makes the path of reasoning distinctly different from that of the two-envelope problem. Third, the two-envelope problem specifically addresses amounts that differ by a factor of two. Although this does not alter the reasoning in the problem, it does simplify the discussion when attempting to represent the amounts as variables. And fourth, because the two-envelope problem involves amounts that are unknown and undisclosed, it allows for the argument that proposes a continuous and endless exchange. That same argument is more difficult to propose in the two-wallet problem because the narrative does not support the exchange of unopened wallets.
Accordingly, it appears out of perspective to relate the two-wallet problem to the two-envelope problem without mentioning the more closely related Necktie Paradox. This is similar to (though perhaps not as important as) misplacing a species in the evolutionary tree.
Bill Wolf (talk) 21:10, 21 February 2009 (UTC)
- There is no point in attacking the article for being inconsistent in its explanations. A Wikipedia article only reports the different opinions that can be found in the published literature. And in a case like this where there are many different opinions by many different authors, we can't expect anything else than a set of incoherent ideas. And that is how it should be. Your personal ideas for solutions are unfortunately of no relevance to the Wikipedia article, unless you publish your ideas in a peer reviewed article first. But please feel free to enhance the article regarding structure and wording, if you can avoid the temptation to be guided by your personal opinions while editing the article. iNic (talk) 19:20, 5 October 2009 (UTC)
Solving the real problem
Sorry to be a spoil-sport but I believe that all the above solutions are beside the point. To resolve the paradox it is necessary to debunk the reasoning given in the section entitled "An Even Harder Problem". I believe that my solution does this, but it may be a case of being a legend in my own lunchtime! Please see http://soler7.com/IFAQ/two_envelope_paradox_Q.htm Soler97 (talk) 00:34, 18 July 2009 (UTC)
Tell me if I'm wrong, but...
I'll use the simplest example from the page
Is the solution to divide the answer by the coefficients (in this case: 2+½)? The end result is 1/2 for whatever values used for the coefficients, which is what logic dictates. Really, please inform me if this has either already been stated and I just missed it, or if I'm just plain wrong. Buddhasmom (talk) 03:32, 3 May 2009 (UTC)
step 6
Step six is steps four + five, which are mutually exclusive. You can't just add them together. —Preceding unsigned comment added by 76.208.70.63 (talk) 07:50, 17 July 2009 (UTC)
Frequentist vs Bayesian distinction
Does this distinction apply to the 'even harder version' of the paradox? It seems to me that this rigorous statement of the paradox is amenable to frequentist analysis. Can someone comment on this please. Soler97 (talk) 21:37, 22 July 2009 (UTC)
- Why would this variant be more rigorous than any of the others? Can you please explain to me how frequentist reasoning can solve the other variants but at the same time not this one? iNic (talk) 01:18, 2 October 2009 (UTC)
The original two-envelope problem is not a paradox
(Here's a brilliant solution to the "paradox" I recently found on the web)
The original two-envelope problem is not a paradox. It simply reflects the fact that each envelope has a positive value, and the fact that the second choice (after having picked an initial envelope) is not a double or nothing bet.
After picking the first envelope, you are guaranteed at least half of the first envelope (not zero), with the prospect of getting twice that. In order for there to be no expected gain from switching, the second envelope would need to have a 50-50 probability of having either zero dollars or double the first envelope.
What is most notable is that the expected value of the entire game does not change after the first and second choices. Before the first choice, there is a 50% chance of getting $c and a 50% chance of getting $2c. Thus, the expected value of the game -- before any selection is made -- is $1.25c. This doesn't change after the initial choice has been made, as noted in the article.
What would be more paradoxical is if the expected value of the game did change after the selection of the first envelope. Since picking an envelope conveys no information about its value relative to the other envelope, there shouldn't be any change in the expected value of the game.
A variation on this game that destroys the so-called paradox would be to have someone pick one of two colors of chips (e.g., red and green). One of the chips is worth $c and the other has a 50-50 probability of being worth either $0 or $2c. In this case, the expected value of the game before the first selection is $c [or 0.5($c) + 0.5((0.5)($0) + (0.5)($2c))]; and it remains $c regardless of whether you switch chips or not, since each chip individually has an expected value of $c. —Preceding unsigned comment added by 216.164.204.65 (talk) 17:40, 18 August 2009 (UTC)
- If "there is a 50% chance of getting $c and a 50% chance of getting $2c" I would expect the expected value to be $1.5c for that game, and not $1.25c. I can't follow the rest of the reasoning either, despite the brilliance you see here. iNic (talk) 01:38, 2 October 2009 (UTC)
Why is there so much debate over this?
Surely the problem is simply that the expectation calculation is making two false assumptions - firstly that you pick one value consistently, and secondly (as pointed out in the article) that a third value is introduced. So for example to do a monte carlo simulation coming out with an expected switch value of (5/4)A, you'd first have to set it up so that one of the values was A, and the other envelope switched between (1/2)A and 2A at random between trials. Then, you'd make sure that the envelope with A in it is picked every time. Clearly, neither of these things are true in the actual problem. This is also why consistent switching could not work - once you switch you have not picked A any more. —Preceding unsigned comment added by 131.111.213.31 (talk) 12:49, 12 September 2009 (UTC)
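As a rough illustration of the point above, here is a minimal Python sketch contrasting a faithful simulation (fix the pair, randomize only the pick) with the rigged setup behind the (5/4)A figure (hold the picked value fixed and re-randomize the other envelope every trial). The concrete amounts, seed, and trial count are arbitrary:

import random

rng = random.Random(1)
TRIALS = 100_000

# Faithful setup: fix the pair (C, 2C), randomize only which envelope is picked.
C = 10
keep = swap = 0.0
for _ in range(TRIALS):
    picked, other = (C, 2 * C) if rng.random() < 0.5 else (2 * C, C)
    keep += picked
    swap += other
print(keep / TRIALS, swap / TRIALS)   # both near 1.5 * C: no gain from swapping

# Rigged setup: hold the picked amount A fixed and re-randomize the other
# envelope between A/2 and 2A on every trial.
A = 10
swap = 0.0
for _ in range(TRIALS):
    swap += 2 * A if rng.random() < 0.5 else A / 2
print(swap / TRIALS)                  # near 1.25 * A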
- I don't know why there is so much debate over this. But I do know that you just added to the amount of text written about this problem. So you might get the answer to your question by asking yourself. iNic (talk) 01:18, 2 October 2009 (UTC)
Here is the correction to the switch argument
The mistake lies in steps 6 and 7, because they combine two different values of the same variable A into a single equation, an inconsistency I'm personally surprised is still in the article. Sorry for the programming-style format, but here is how I found the proper equation for the expected value, and what it results in.
A=small sum
B=large sum
C=first selection
D=other envelope
First, either C=A and D=B, or C=B and D=A
In either case, the equation for expected value is found as follows, simplified:
A/2+B/2=(A+B)/2=(C+D)/2
So, the correct equation for expected value in this situation is (C+D)/2
or ([Item selected]+[Other Item])/[Number of Items]
From this comes
(C+D)/2=((B/2)+B)/2 = (3B/2)/2 = 3B/4
(C+D)/2=(A+2A)/2 = 3A/2
And 3A/2 = 3B/4 by definition
Then,
(3A/2)/3 + (3B/4)/3 = 3A/6 + 3B/12 = A/2 + B/4 = 2A/4 + B/4 = (2A+B)/4 = (4A)/4 = A
(3A/2)/3 + (3B/4)/3 = 3A/6 + 3B/12 = A/2 + B/4 = 2A/4 + B/4 = (2B)/4 = B/2 = A
where C=A, and since A=A, there is equal probability of either envelope having the larger amount. So, there is no difference whether or not one chooses to switch envelopes.
Please correct me if I made any mistakes.
ExtremecircuitzUbox —Preceding undated comment added 22:27, 25 November 2009 (UTC).
My Solution
It seems obvious to me that the arithmetic mean is inappropriate for determining the expectancy of the amount in the other envelope. Instead use the geometric mean:
A: amount in selected envelope
B: expected amount in the other envelope
B = SQRT((0.5*A)*(2*A)) = SQRT(0.5*2*A*A) = SQRT(A*A) = A
B = A, or in other words keep your envelope unless you need exercise.
It seems that a problem statement that would fit the arithmetic mean expectancy is as follows: "The envelope that you have picked has A money. The other one has C more or C less than yours."
The expectancy math from the original article now makes sense:
B = ((A + C) + (A - C)) / 2 = (A + C + A - C) / 2 = (A + A) / 2 = A
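A tiny Python check of the two expectancy formulas above (the values chosen for A and C are arbitrary):

import math

A, C = 20.0, 5.0
geometric = math.sqrt((0.5 * A) * (2 * A))   # geometric mean of the two candidates
additive  = ((A + C) + (A - C)) / 2          # arithmetic mean for the "C more or C less" setup
print(geometric, additive)                   # both print 20.0, i.e. equal to A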
I am very uneducated math-wise. Please respond with arguments.
Dariac —Preceding unsigned comment added by 1.2.3.4 (talk) 05:00, 26 November 2009 (UTC)
Solution to original paradox is much simpler
When calculating the expected value in ½ * 2A + ½ * A/2 = 5/4 A:
The letter A denotes two different numbers. The first A denotes the amount of money in the picked envelope in the case when it is smaller than the amount in the other envelope. The second A denotes the amount of money in the picked envelope in the case when it is larger than the amount in the other envelope.
So you cannot add two different variables (mistakenly both named A) like that.
In the first point A means "amount of money in the picked envelope", but in the above equation the first A means "amount of money in the picked envelope provided that we picked the envelope with less money" and the second A means "amount of money in the picked envelope provided that we picked the envelope with more money". So the symbol A changes meaning in the course of the reasoning, and that is the cause of the apparent paradox.
- That is what an expectation value is.
- My concern is that A denotes the expectation value of three different random variables across the whole reasoning: at the beginning an unconditional one, and in the calculation two other variables similar to the first but restricted by additional conditions.
- What would be your answer to this question? There are three envelopes containing £100, £200, £400. You have the £200 envelope. You are offered the option to swap your envelope for one of the others chosen uniformly at random. Do you swap and why? Martin Hogbin (talk) 10:32, 1 June 2010 (UTC)
- I swap because the expected value of swapping is (£400 + £100)/2 = £250, which is more than the cost of swapping, fixed at £200.
- I do agree, however, that trying to assign a variable to the value of the unknown envelope that you hold, in the original two enveloped problem, is a bad idea. Martin Hogbin (talk) 10:41, 1 June 2010 (UTC)
- It may not necessarily be a bad idea. The bad idea is using the same letter to denote similar but significantly different concepts within one's reasoning.
But I must agree that the original formulation of this paradox by Maurice Kraitchik is much more interesting; it is much harder to spot the error there, and I think it can't be done without considering distributions. In my opinion the problem is that the assumption of a 0.5 chance of winning is made too hastily, because the actual probability of winning depends on the value of the contents of my wallet and on the distribution of wallet contents. If I have more, then I can win more, but I am also less likely to win, so it evens out.
Ignorance about the conditions you are operating under does not entitle you to assume a 0.5 probability and expect correct results. It's actually very nice that in this case you can come up with the correct solution without making any assumptions, just by observing the clear symmetry of the game.
- Do you have any sources for this problem and its solutions? A major problem with the original article is that it was mainly unsourced. Can I also suggest that you sign your posts, just type four tildes ( ~~~~ ) at the end of your text and the system automatically signs and dates it for you. Martin Hogbin (talk) 11:40, 1 June 2010 (UTC)
- I'm sorry, I don't have any sources. I just thought this paradox through and these are my conclusions. I'm sorry for not signing, but I don't have an account here. —Preceding unsigned comment added by 217.113.236.47 (talk) 21:57, 6 August 2010 (UTC)
Not up to Wiki standards
I will confess that I have not read the previous discussion here, but I've got to say this article is not up to Wiki standards. I quote: "There is no solution, which is accepted by all scientists although solutions have been proposed." (Put a "[sic]" after the misused comma.) Well, if there are scientists who cannot see where the misstep is, then there are scientists who are dumb as a box of hammers. My guess is that scientists just have better things to puzzle over. The indefensible mistake is at step seven, although the groundwork is laid earlier, by introducing the symbol A to represent two different quantities in two different contexts. Step six says, "Thus the other envelope contains 2A with probability 1/2 and A/2 with probability 1/2." True enough. We have two correct formulas exhaustively representing possible payouts weighted by probability.
F1: 2A*(1/2)
F2: (A/2)*(1/2)
In F1, "A" unambiguously means the smaller of the two payouts. It is short for "the monetary value of A, given A is the smaller payment." If it means anything else, there is no justification for the formula. Likewise, F2 can only be justified if "A" means the larger of the two payouts. The two formulas cannot be combined as per step 7, because they use the same symbol to mean two different things.
Repair work is easy. Let's call the value of the larger payment "L", and the value of the smaller "S". We know that L = 2*S
In formula F1, it is understood that "A" means the smaller payout. Substitute S for A, giving 2S*(1/2), and in F2, substitute 2*S for A, giving (2*S/2)*(1/2). Now both formulas use the symbol S to mean the same thing. We may therefore add them up and factor out S. The result reduces to 3S/2, which is correct.
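For anyone who wants to verify the repaired arithmetic, here is a minimal Python sketch (the value of S is arbitrary; exact fractions avoid any rounding):

from fractions import Fraction

S = Fraction(10)          # the smaller payout; the larger is L = 2S
L = 2 * S

# The repaired formulas F1 and F2, both written in terms of S:
expected_other = Fraction(1, 2) * L + Fraction(1, 2) * S   # 2S*(1/2) + S*(1/2)
print(expected_other, expected_other / S)                  # 15 and 3/2, i.e. 3S/2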
What is the payout in terms of A? We cannot express it as a product of the form r*A, where r is a real number. We can only express it as a probability distribution, or as an expected value. Jive Dadson (talk) 14:17, 24 December 2010 (UTC)
Edit: The argument uses the symbol A to mean at least three different things. 1) The monetary value of "my selected envelope." 2) The monetary value of the envelope, given that it contains the smaller prize, and 3) You know three. Jive Dadson (talk) 14:48, 24 December 2010 (UTC)