Talk:Allais paradox

From Wikipedia, the free encyclopedia

Misleading proof

Even though studies show that people paradoxically tend to choose Gamble 1A, that does not make 1.00(1) greater than 0.89(1) + 0.01(0) + 0.10(5). The issue should not be presented as a proof, because without context the paradox looks like a calculation error. Suggest changing it to:

Mathematical demonstration of inconsistency

Experiment 1

Gamble 1A is paradoxically preferred over Gamble 1B (problematically suggesting Gamble 1A > Gamble 1B), despite the relationship 1.00U($1M) < 0.89U($1M) + 0.01U($0M) + 0.10U($5M).

Experiment 2

Gamble 2B is preferred, as expected: 0.89U($0M) + 0.11U($1M) < 0.90U($0M) + 0.10U($5M). Mcpherson.emily (talk) 21:05, 25 March 2014 (UTC)[reply]
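A short algebraic step (standard, and using the same U(·) notation as above) makes the inconsistency explicit: preferring 1A over 1B means 1.00U($1M) > 0.89U($1M) + 0.01U($0M) + 0.10U($5M), i.e. 0.11U($1M) > 0.01U($0M) + 0.10U($5M), while preferring 2B over 2A means 0.89U($0M) + 0.11U($1M) < 0.90U($0M) + 0.10U($5M), i.e. 0.11U($1M) < 0.01U($0M) + 0.10U($5M). No utility function U can satisfy both inequalities at once, which is why the common pair of choices is inconsistent with expected utility theory.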

Makes no sense

Also, the opening paragraphs are nonsensical. Where do the numbers suddenly come from? 1A is the surefire thing to take, since there is no risk of losing; that gives you $1M. In the latter case, 2B is better: 11% and 10% are so close to each other, and the probability is so small, that you will wish to maximize the winnings should you be lucky. Therefore you pick 2B, which offers $5M with a 10% chance. Why this is called a paradox is a paradox to me. -Anon 30.06.2006

It's only a paradox if you accept the "independence axiom", which says that if there are three different lottery tickets A, B and C, and someone prefers B to A, then they should prefer the combination of B and C to the combination of A and C, no matter what C is. Allais carefully constructed C to turn (A and C) into what you call a surefire thing, which made it preferable to (B and C), violating the axiom.--agr 22:37, 30 June 2006 (UTC)[reply]
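Stated a bit more formally (a standard textbook form of the axiom, not agr's exact wording): if lottery B is preferred to lottery A, then for every lottery C and every probability p in (0, 1], the mixture pB + (1−p)C should be preferred to pA + (1−p)C. In the Allais setup, A is "$1M for certain", B is "a 10/11 chance of $5M", p = 0.11, and C is "$1M for certain" in the first choice and "nothing" in the second; the four resulting mixtures are exactly gambles 1A, 1B, 2A and 2B.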
Anon - that's a great illustration of the paradox: why do people prefer 1A in choice 1 and 2B in choice 2, when in fact, if 1A is preferable to 1B, then 2A must be preferable to 2B? This is because in choosing 1A you throw away a 10% chance of making an extra $4m to remove a 1% risk of getting nothing. In choosing 2A you would do exactly the same thing. I chose the same way you did, for the same reason -- [[User:dilaudid]]
This article could probably be written more clearly. Think about it this way: there is a game show where each contestant is given a random number between 0 and 99 (they do not know the number until after their choice is made). The host then says that if the number is between 0 and 99 (which it is guaranteed to be) the contestant wins $1mil (1A). OR, if the contestant is feeling lucky, they can choose the game where a number between 90 and 99 wins them $5mil instead, and 1 through 89 still wins the $1mil. The only catch is that if the contestant gets 0, they win nothing (1B). Scenario 2 goes like this: same game, same rules, except that now anything between 1 and 89 has the contestant go home with nothing, while 0 and 90 through 99 win them the $1mil (2A). The contestant can also choose to play the game where anything between 0 and 89 loses, but 90 through 99 wins $5mil (2B).
In both scenarios, there is only a single number, 0, on which the choice (1A or 1B, and 2A or 2B) turns a win into a loss. In both scenarios, numbers 90 through 99 multiply the winnings by 5. So why, when the A-versus-B trade-off is EXACTLY identical in both scenarios, is it common for people to choose 1A and then 2B?
To me, the answer to that question is in our approach to risk, which at times can be somewhat paradoxical in nature. If we are given a 100% chance of winning $1mil, and then introduce a 1% chance of failure because of our greed, and furthermore LOSE because of that risk, we'd beat ourselves up over it for weeks. On the other hand, if we just had an 11% chance to win anything in the first place, dropping down to 10% for a chance to win 5x more doesn't seem that bad. The math tells us both scenarios are equal, yet mathematics can't feel regret over losing $1mil because of its own greed. -Anon2 11:28PM CST 9/21/13
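For anyone who wants to check the 0–99 reframing above numerically, here is a minimal Python sketch (the payoff rules simply transcribe the game-show description; names are illustrative):

    # Payoffs in millions of dollars; tickets 0..99 are equally likely (1% each).
    def payoff(gamble, n):
        if gamble == "1A":
            return 1                                    # every ticket wins $1M
        if gamble == "1B":
            return 5 if n >= 90 else (0 if n == 0 else 1)
        if gamble == "2A":
            return 1 if (n == 0 or n >= 90) else 0
        if gamble == "2B":
            return 5 if n >= 90 else 0

    # Payoff distribution of each gamble (counts out of 100 tickets).
    for g in ("1A", "1B", "2A", "2B"):
        dist = {}
        for n in range(100):
            dist[payoff(g, n)] = dist.get(payoff(g, n), 0) + 1
        print(g, dist)

    # The A-versus-B difference is the same ticket by ticket in both choices,
    # which is exactly what the independence axiom says should make the two
    # choices equivalent.
    assert all(payoff("1A", n) - payoff("1B", n) == payoff("2A", n) - payoff("2B", n)
               for n in range(100))

It reports the familiar distributions: 1A pays $1M with certainty, 1B pays $1M/nothing/$5M with probabilities 89%/1%/10%, 2A pays $1M with probability 11%, and 2B pays $5M with probability 10%.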

Expert request

The final two paragraphs make no sense to me. It may be that they are simply phrased rather badly but perhaps also they make use of jargon I don't understand. Seeing the wikilinked articles doesn't really help. They don't do the job of explaining the paradox very well. If anyone wants to have a stab at rephrasing them please do.  Run!  22:06, 9 February 2006 (UTC)[reply]

allais paradox fallacy

The biggest fallacy in this paradox is the confusion between normative and descriptive systems. Economics is a descriptive field: it describes how the world actually works. Decision analysis (or, loosely, "expected utility theory") is a normative field: it proposes a system that can help us overcome our biases. It is obvious that people not exposed to decision analysis don't use it when they make their decisions. Observing that they don't, and concluding that the theory is therefore false, is a fallacy.

People not exposed to normative decision making will end up making bad decisions, and it is correct to point out that they do so. However, this in no way invalidates the axioms of a normative field like decision analysis, just as making an arithmetic mistake in real life does not invalidate the rules of arithmetic.

Yes, fair distinction! But I respond:
  • Expected utility theory is often used as a descriptive system.
  • The behaviour described is not a bad decision.

-Dan 15:06, 30 May 2006 (UTC)

The behavior described *is* a bad decision - I made the same choice as everyone else, but what do you expect - humans aren't perfectly rational. According to the article, Allais used this as an argument against the theory that people maximise utility, but this could either mean that maximising expected utility is not the correct thing to do, or that people aren't able to maximise utility in day-to-day decisions and hence make inferior choices. Since you can change the outcome of the test by framing the example differently, I think the evidence favours the latter explanation. --Dilaudid (talk) 11:41, 28 October 2009 (UTC)[reply]
Am I the only one who went with 1B? Haha! In any case, I don't see why this would disprove utilitarianism. Bill Gates is much more likely to play 1B than a homeless man on the street, because $1mil has a LOT more utility for the homeless man. He'd be less willing to take the 1% risk. Put it this way...if Allais had used "$100 trillion" and "$500 trillion" don't you think most everyone would choose the guaranteed $100 trillion? Or, "$1" and "$5", everyone would pick 1B? I think Allais was just pointing out that humans are guided by more than utilitarianism alone. A machine might quickly choose 1B and 2B, or at least pick consistently across all prize ranges, but a machine won't feel regret when it's that unlucky 1%. Anon2 11:47, 21 September 2013 (CST) — Preceding unsigned comment added by 173.23.82.190 (talk)

allais paradox is faulty ?

It is faulty because alternative 1A is not a gamble at all!

1A has a 100% chance to win, to be successful. This is a sure win. Attractive, is it not? But would anybody in his right mind offer you a million dollars, just like that?

All other options include a chance to lose, to be a failure. This is gambling. Not attractive. We have learned that gambling is dangerous (if not bad). Fear and the avoidance of risks and dangers are stronger motivators than rationality, much stronger.

1A simply is the natural choice. It is common sense. Common sense is a feeling; it is often hard to explain. Apparently so hard that the Allais paradox is still being discussed and not settled. I came across it in The Robot's Rebellion by Keith E. Stanovich (2004).

Maybe there is a bigger taboo behind this than not wanting to appear to be motivated by negative feelings like the fear of losing, avoiding risk and going for an easy win.

Returning to the oddness of being given a million dollars just like that and then being persuaded to gamble with it. Would you not smell a rat? Yes! Would you say so? No! You would like it to be honest, however strange it is. You do not want to offend. You do not want to show mistrust. There you have it. The greatest social taboo of all: to show mistrust. How do relationships come to an end? With the words "I do not trust you anymore." Good for us that we have a subconscious (or TASS or whatever) that is mistrustful for us, otherwise we would run into a bus all too soon.

Just like the prisoner's dilemma, the Allais paradox is basically about trust.

The idea that the paradox is a certainty effect has been put forward by several writers. I'll add it in when I get time. JQ 05:20, 6 March 2006 (UTC)[reply]

Not about rationality.

You're not supposed to think about the believability of the experiment...that's why it's a thought experiment. Anyway, for his study, I really doubt Allais actually offered his test subjects $1mil and $5mil. Those were simply numbers written on paper, pushed in front of people who agreed to get paid to take the study. They were, of course, not supposed to answer based on the believability of the percentages given. — Preceding unsigned comment added by 173.23.82.190 (talk) 04:54, 22 September 2013 (UTC)[reply]

Regret

This paradox is easily solved by looking at the regret of the player as a utility function. Since there is a lot of regret in not getting anything for your efforts, this rules out taking anything other than 1A. However, there is more regret in missing $5m than $1m, so in the second choice you take the $5m gamble.

Oh, you can take into account regret, sure. But then it is no longer a utility function. Any decision based on a utility function would not depend on common outcomes. -Dan 192.75.48.150 17:00, 14 July 2006 (UTC)[reply]

Can someone provide an example of how looking at regret solves the paradox? If I define a regret function R(a,b) as a function of the decision chosen (a) and the best decision given the actual outcome (b), it still seems to me that there is a paradox. Clarification would be appreciated. -Emin

It depends on what you mean by "solve". Personally, I don't think regret removes the paradox; it just helps explain why it exists. First, let's define paradox: "A paradox is a statement that apparently contradicts itself and yet might be true." People tend to choose 2B after choosing 1A, even though the value of 1A (from a purely economic standpoint) is lower than the value of 1B, while the value of 2B is actually HIGHER than that of 2A. The paradox arises because expected utility theory tells us that a person should select 1B and then 2B (or, if the risk seems too high, 1A and 2A), as the differences between 1A and 1B, and between 2A and 2B, are identical. BUT expected utility theory does not include the psychological damage of the regret felt after losing $1mil because of greed. It can be argued that the regret one would feel explains why this topic seems to be a paradox on the surface but really isn't. At the very least, I think this SEEMS enough like a paradox for us to actually call it one. — Preceding unsigned comment added by 173.23.82.190 (talk) 05:43, 22 September 2013 (UTC)[reply]


I'm not sure how to edit in a fully Wiki-compatible way, but the explanation of how regret can account for this effect can be found in Loomes and Sugden's 1982 Regret Theory paper, on page 813 -- this is cited here, I believe: https://en.wikipedia.org/wiki/Regret_(decision_theory). It's written in reference to Kahneman and Tversky's Problem 4/10 in their 1979 Prospect Theory paper, but the same basic logic applies here. — Preceding unsigned comment added by 98.37.146.238 (talk) 19:06, 25 June 2021 (UTC)[reply]

Initial confusion

I think that there is some confusion induced by cramming the two pairs together into one graph, and then not discussing them individually. This approach is probably okay for people who already understand the paradox, but probably confusing for most people who (like me) have never seen the paradox before. If the two pairs can be visually separated

Choice 1                                    Choice 2
Gamble 1A            Gamble 1B              Gamble 2A            Gamble 2B
Winnings    Chance   Winnings    Chance     Winnings    Chance   Winnings    Chance
$1 million  100%     $1 million  89%        nothing     89%      nothing     90%
                     nothing     1%         $1 million  11%
                     $5 million  10%                             $5 million  10%

Either with a line

Choice 1                                  | Choice 2
Gamble 1A            Gamble 1B            | Gamble 2A            Gamble 2B
Winnings    Chance   Winnings    Chance   | Winnings    Chance   Winnings    Chance
$1 million  100%     $1 million  89%      | nothing     89%      nothing     90%
                     nothing     1%       | $1 million  11%
                     $5 million  10%      |                      $5 million  10%

or into two separate tables

Choice 1
Gamble 1A            Gamble 1B
Winnings    Chance   Winnings    Chance
$1 million  100%     $1 million  89%
                     nothing     1%
                     $5 million  10%

with discussion between

Choice 2
Gamble 2A            Gamble 2B
Winnings    Chance   Winnings    Chance
nothing     89%      nothing     90%
$1 million  11%
                     $5 million  10%

then it may make clear what the chooser is being asked to choose between, and help clarify the issue. -The Gomm 01:52, 7 September 2006 (UTC)[reply]

PS. Please paste whichever version you like better into the article, or better yet, fix my formatting first.




Personally I think something like the table below is a lot more intuitive, simply because there are fewer boxes to get your head around! That said, I've removed the colours, which might make it a bit harder to read. There doesn't seem to be much sense in colouring by amount (you'd simply get rows in different colours) and colouring in the probabilities would either a) make them hard to distinguish when there's only 1% in between or b) over-emphasise these small differences.

Prize        Gamble 1A   Gamble 1B   Gamble 2A   Gamble 2B
$5 million   0%          10%         0%          10%
$1 million   100%        89%         11%         0%
Nothing      0%          1%          89%         90%

--Alex Whittaker 19:08, 20 April 2007 (UTC)[reply]

Here is a table I came up with to help explain it better. (I also like the solid bar in the middle, as shown above.)
Experiment 1                              Experiment 2
Gamble 1A            Gamble 1B            Gamble 2A            Gamble 2B
Winnings    Chance   Winnings    Chance   Winnings    Chance   Winnings    Chance
$1 million  89%      $1 million  89%      Nothing     89%      Nothing     89%
$1 million  11%      Nothing     1%       $1 million  11%      Nothing     1%
                     $5 million  10%                           $5 million  10%

--Rajah (talk) 02:01, 13 June 2009 (UTC)[reply]

exactness

"Presented with the choice between 1A and 1B, most people choose 1A. Presented with the choice between 2A and 2B, most people choose 2B." How many are most ? this is given presumably as an ideal example, not the result of an actual experiment, but it still makes a difference whether most =60% or 90%. DGG (talk) 16:08, 20 February 2008 (UTC)[reply]

In the absence of experimental data, I don't see how we can be more precise. Also "most people" has an implication about the person's economic circumstances, i.e. a million dollars is a lot of money for them. A trader working at Goldman Sachs would likely choose 1B (if I were his boss I'd fire him if he didn't). So would most people if all the rewards were reduced by a factor of 100,000.--agr (talk) 16:43, 20 February 2008 (UTC)[reply]
Is there really no data? If so, I do not see how you can justify the word "most". 18:22, 20 February 2008 (UTC)
Allais-type experiments have been performed. It would be nice to dig up a good survey article published somewhere, but that's work. --192.75.48.150 (talk) 18:40, 20 February 2008 (UTC)[reply]
I have added a fact tag; unless something can be cited, the sentence must be reworded as hypothetical. DGG (talk) 04:57, 21 February 2008 (UTC)[reply]

Money pump argument

Moved from the article[1]

Eliezer Yudkowsky argued on the blog Overcoming Bias [2] that preferring 1A to 1B and 2B to 2A makes one exploitable as a "money pump". His argument is as follows:
  • There is a switch with settings A and B, initially set at A. Alice will generate a number from 1 to 100; if it is greater than 89, she will consult the switch and proceed with gamble 1A or 1B depending on its setting, giving Bob any winnings.
  • Before generating the number from 1 to 100, Alice asks Bob if he would like to pay $1.00 to move the switch to B. At this time, Bob is choosing between 2A and 2B, so he will agree to pay $1.00 to switch (assuming he values 2B at least a dollar more).
  • After generating the number (which turns out to be greater than 89), Alice asks Bob if he would like to pay $1.00 to move the switch to A. At this time, Bob is choosing between 1A and 1B, so he will again pay $1.00 to switch (assuming he values 1A at least a dollar more).
  • Bob has now paid $2.00 to move a switch and then move it back. In fact, if he is introspective enough, he has paid $1.00 to move a switch, knowing in advance that he would later pay $1.00 to move it back.

The maths seem to be off. Call Gamble 3A/3B an 11% chance that you will get Gamble 1A/1B and an 89% chance that you will get nothing. This is NOT identical to Gamble 2A/2B. (My first hint that something was wrong was the phrase "argued on the blog") --192.75.48.150 (talk) 18:31, 20 February 2008 (UTC)[reply]
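A quick numerical check of this objection, reading gamble 3B as "89% chance of nothing, otherwise play gamble 1B" (a small Python sketch; the names 3A/3B just follow the comment above):

    # Payoffs are in $ millions; probabilities as exact fractions.
    from fractions import Fraction as F

    gamble_1B = {1: F(89, 100), 0: F(1, 100), 5: F(10, 100)}
    gamble_2B = {0: F(90, 100), 5: F(10, 100)}

    # Compound lottery "3B": 89% nothing, otherwise play gamble 1B.
    gamble_3B = {0: F(89, 100)}
    for payoff, p in gamble_1B.items():
        gamble_3B[payoff] = gamble_3B.get(payoff, 0) + F(11, 100) * p

    print(gamble_3B)                 # nothing: 8911/10000, $1M: 979/10000, $5M: 11/1000
    print(gamble_3B == gamble_2B)    # False -- 3B carries a 9.79% chance of $1M that 2B lacks

The analogous 3A (89% nothing plus an 11% chance of the sure $1M) does coincide with 2A, so it is the B side of the construction that does not line up.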

Still massively unclear

I'll post the bit I think doesn't help at all ...

"It may help to re-write the payoffs. 1A offers an 89% chance of winning 1 million and a 11% chance of winning 1 million, where the 89% chance is irrelevant. 2B offers an 89% chance of winning nothing, a 1% chance of winning nothing, and a 10% chance of winning 5 million, with the 89% chance of nothing disregarded. Hence, choice 1A and 2A should now clearly be seen as the same choice, and 1B and 2B as the same choice"

Not at all. 1A offers a 100% chance of 1 million. 2A offers an 11% chance of 1 million. So 1A and 2A are clearly not the same. Asking us to believe that an 11% chance of a million and a guaranteed million are the same thing is ridiculous, and far from clearing up any confusion it only clouds the issue further. 1B and 2B aren't even close. Yes, they both have a 10% chance of 5 million, but 1B also has an 89% chance of a million.

So unless someone was already wealthy, everyone would take 1A. Then if 1A were unavailable we'd all take 1B. 2A is so unattractive as to be almost useless, and 2B is only choosable if you're only offered the choices labelled 2.

No, either I'm missing something incredibly obvious, so obvious as to be obfuscated by the multiple unclear clarifications in the article, or this is a terrible paradox. The best I can make of it is that if we're offered 1A or 1B then we'd take 1A, but between 2A and 2B we'd take 2B even if it's a similar gamble to the 1B one we rejected. Except that 1B also gives us a near guarantee of at least a million, something missing from 2B. So the "may help a bit" text entirely fails to do so.

I can't accept that it's surprising people would choose a great chance of getting rich over a small chance of getting rich. Nor that, given the choice between two roughly equally unlikely chances of getting rich, we'd choose the one that potentially makes us wealthiest. It's so obvious I'm amazed someone bothered to name it as a "paradox". VonBlade (talk) 21:27, 25 February 2008 (UTC)[reply]

Quite so. You, I, and likely most people, think they are NOT the same choice, but the mathematics of expected utility theory says they are. Allais actually intended it to suggest that the theory is wrong. The "part that helps" sounds like it is taking the tone that the theory is right and we are wrong. Or at best, it presumes we intuitively have a sense of the independence axiom, even before it's been explicitly formulated in the article, and thus the two choices "should now clearly be seen as the same choice", when in reality we have no such sense. --192.75.48.150 (talk) 18:50, 18 April 2008 (UTC)[reply]
And they aren't seen as the same choice because of giving up certainty; humans value certainty quite a bit. It could be a 0.1% chance of losing the $1m with a $100m top prize; as long as $1m was a significant amount for people, few people (although perhaps more than in the original problem) would jump at the $100m once they were no longer guaranteed the $1m. It's actually quite an ingenious idea that basically says there's something wrong with how theories of expected utility can be used to predict human choices. 141.217.233.127 (talk) 13:36, 27 February 2009 (UTC)[reply]

how can something so complicated be applied to ordinary people ?

I don't understand how anyone can take seriously a theory which is based on people understanding the example given in the article. Perhaps that is the point - a recent commentary in the journal Nature makes the point that to most scientists, economics is nonsense. Cinnamon colbert (talk) 17:35, 20 April 2009 (UTC)[reply]

The theory is not "based on people understanding the example given in the article". In fact, if the subjects used in the study that was conducted by Allais understood the Allais paradox before taking the test, they'd skew the results. Simply put, this theory is based on the findings that most people will take the guaranteed safe, yet much less profitable option when given the opportunity, yet will do the exact opposite if a good amount of risk is already present. -Anon2

It's not really that complicated.

I always use the following example. Suppose the choices in the first case are between 1A and 1B, where

1A: $1 million w/ prob. 1

1B: $4 million w/ prob. 98/99, 0 otherwise

In the second case the choices are:

2A: $1 million w/ prob. 99/100, 0 otherwise

2B: $4 million w/ prob. 98/100, 0 otherwise

The paradox arises when people choose 1A over 1B, but also say they would choose 2B over 2A if that is the choice. To see why this is rather odd, implement the second choice as follows. I put 100 balls in an urn. 98 are black, 1 is red, and 1 is blue. I give you the choice between 2A and 2B. I will draw a ball at random. If you choose 2A you "win" if the ball is black or red. If you choose 2B you "win" if the ball is black. Suppose you choose 2B. I draw a ball, but before telling you which color it is I tell you whether it is or is not blue and then offer a chance to change your mind. If I tell you it is blue, it doesn't matter; you get zero either way. If I tell you it is not blue, you would want to switch if you prefer 1A to 1B, because once blue is ruled out, that is the choice. Nonetheless, people do act this way. Ceberw (talk) 22:32, 9 September 2011 (UTC)[reply]
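For readers checking the arithmetic in this urn version: before the draw, 2A wins with probability 99/100 and 2B with probability 98/100. Conditional on the drawn ball not being blue, 99 equally likely balls remain, so 2A wins with probability 99/99 = 1 and 2B with probability 98/99 — exactly the sure thing versus the 98/99 gamble in the first choice.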

What you're describing seems to be a weird mix of the Monty Hall problem and the Allais paradox. In any case, your Allais paradox example is faulty for two reasons: 1) the numbers are too close together. While your 1A is very safe, which is required, 2A should be somewhat risky, so that the people answering aren't tempted to pick it just because they're confident they'll win. 2) The differences in odds between your 1A and 1B, and your 2A and 2B need to be the same. You don't want somebody changing their minds between 1 and 2 simply because one of those two offers better odds. -Anon2

a better way to explain the paradox

Experiment 1                              Experiment 2
Gamble 1A            Gamble 1B            Gamble 2A            Gamble 2B
Winnings    Chance   Winnings    Chance   Winnings    Chance   Winnings    Chance
$1 million  89%      $1 million  89%      Nothing     89%      Nothing     89%
$1 million  11%      Nothing     1%       $1 million  11%      Nothing     1%
                     $5 million  10%                           $5 million  10%

I believe there is no mention of the calculations of expected utility in the article. Something along the following lines should be added: "The expected utilities of the various bets are as follows. For experiment 1, where the choice is between 1A and 1B: $1 million for 1A and $1.39 million for 1B. For experiment 2, where the choice is between 2A and 2B: $0.11 million for 2A and $0.5 million for 2B. Clearly, in both cases the B choice has the higher expected utility. Paradoxically, human subjects will choose 1A preferentially over 1B and at the same time choose 2B over 2A. That is, they are risk averse with respect to sure things, but are willing to take risks with unlikely odds." --Rajah (talk) 02:10, 13 June 2009 (UTC)[reply]

But those aren't expected utilities (which require an assumption about the actual utility function); they're just expected values. I realize that people want to make the point of this paradox easier to understand, but pretty much all attempts so far have got the basics wrong. Reverting. radek (talk) 02:52, 13 June 2009 (UTC)[reply]
Agreed, radek, but the expected return should be included in the table. I think some people didn't immediately realize that 1B and 2B return much more money than their counterparts do, even when the added risk is figured in. — Preceding unsigned comment added by 173.23.82.190 (talk) 05:02, 22 September 2013 (UTC)[reply]
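For reference, the expected monetary values mentioned above are easy to check (a minimal Python sketch; as radek notes, these are expected values, not expected utilities, which would need an explicit utility function):

    # Payoffs in $ millions with their probabilities, per the tables above.
    gambles = {
        "1A": {1: 1.00},
        "1B": {1: 0.89, 0: 0.01, 5: 0.10},
        "2A": {1: 0.11, 0: 0.89},
        "2B": {5: 0.10, 0: 0.90},
    }
    for name, dist in gambles.items():
        ev = sum(payoff * prob for payoff, prob in dist.items())
        print(name, round(ev, 2))   # 1A 1.0, 1B 1.39, 2A 0.11, 2B 0.5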

Infinities

What if the utility function is -infinity at 0 wealth? In that case, it is rational to choose 1A (otherwise your expected utility is -infinity) and 2B (because both A and B are equivalent in that the expected utility is -infinity). —Preceding unsigned comment added by Emin63 (talkcontribs) 13:59, 17 February 2010 (UTC)[reply]

That's clever. I still don't think you'd choose 2B though - you would simply be indifferent between 2A and 2B. Both would give -inf utility. Dilaudid (talk) 09:29, 30 July 2010 (UTC)[reply]
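Spelled out with the gambles above: EU(1A) = U($1M) is finite, while EU(1B) = 0.89U($1M) + 0.01U($0) + 0.10U($5M) = −∞, and EU(2A) and EU(2B) each contain a U($0) term and so are both −∞. Under that (admittedly extreme) assumption, 1A is strictly required and the 2A/2B choice is a matter of indifference, as noted.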

Most people choose 1A?

This is the part I don't understand. When given the choice, I for certain would choose 1B! A 10% chance of getting $4M extra is better than a 1% chance of getting nothing, isn't it? 99.236.60.25 (talk) 01:36, 7 December 2011 (UTC)[reply]


It's a difference in qualities

I think the actual paradox about all of this is all the needed context that people overlook. What happens if you take smaller sums of money? Or richer people?

I doubt most people really see a big difference between 1 million and 5 million - both are just "a lot". So the 100% 1A choice wins by default. In 1B the 1% chance to lose may be low - but it's still a chance to lose. 1A has 100%, which is - obviously - a 0% chance to lose. So it's a choice between "a lot" at a 100% chance or "a lot" at a 99% chance. Meaning: one is a silly gamble, the other a sure thing. 2A and 2B are both gambles. I think the train of thought behind this is actually: "Both choices are bad choices." --> "It's a gamble altogether." --> "If it's all about luck, then it's just a game --> might as well take the risk, because it's more exciting."

The question is: "Are you a gambler?"... By extension: "Is it smart to be a gambler?"

— Preceding unsigned comment added by 78.48.126.58 (talk) 00:38, 20 July 2016 (UTC)[reply]