Talk:Nash equilibrium/Archive 2

From Wikipedia, the free encyclopedia

Limitations of NE

I disagree with the comment "This indicates one of the limitations of using the Nash equilibrium to analyze a game".

The Nash equilibrium is a predictive tool, and indeed it correctly predicts the (unfortunate) result if self-interested players participate in a Prisoner's dilemma type situation (as borne out in reality, for instance, by overfishing of the world's oceans).

The fact that Nash equilibrium correctly predicts an undesirable result is hardly a flaw or limitation.

Robbrown

I agree with Robbrown; that is not a limitation, it correctly predicts the outcome of a prisoner's dilemma. I will change it. Also, there are better definitions of the Nash equilibrium out there; it might be better to quote a source's definition, like this one (http://www.gametheory.net/Dictionary/NashEquilibrium.html):

Nash equilibrium, named after John Nash, is a set of strategies, one for each player, 
such that no player has incentive to unilaterally change her action. Players are in 
equilibrium if a change in strategies by any one of them would lead that player to earn 
less than if she remained with her current strategy.

--ShaunMacPherson 18:15, 15 Mar 2004 (UTC)
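The quoted definition ("no player has incentive to unilaterally change her action") can be checked mechanically. A minimal sketch: the helper `is_nash` and the little coordination game below are invented for illustration, not taken from any particular source.

```python
def is_nash(payoffs, profile):
    """Check the quoted definition: no player can gain by a unilateral deviation.

    payoffs: dict mapping an action profile (tuple, one action per player)
             to a tuple of payoffs, one per player.
    profile: the candidate action profile to test.
    """
    n = len(profile)
    # Each player's action set, read off from the keys of the payoff table.
    actions = [sorted({p[i] for p in payoffs}) for i in range(n)]
    for i in range(n):
        for a in actions[i]:
            deviation = profile[:i] + (a,) + profile[i + 1:]
            if payoffs[deviation][i] > payoffs[profile][i]:
                return False  # player i has an incentive to switch
    return True

# Illustrative 2x2 coordination game (payoffs made up for the example):
game = {
    ("A", "A"): (2, 2), ("A", "B"): (0, 0),
    ("B", "A"): (0, 0), ("B", "B"): (1, 1),
}
```

Here both ("A", "A") and ("B", "B") pass the check, while the mismatched profiles do not, which also illustrates that equilibria need not be unique.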

I disagree with the sentence "Therefore, in 1965 Reinhard Selten proposed subgame perfect equilibrium as a refinement that eliminates equilibria which depend on non-credible threats." The removal of non-credible threats dependent equilibria is dependent on the assumption of perfect information. This sentence might give a false impression that subgame perfect equilibrium solves the problem of non-credible threats.

Tomas —Preceding undated comment added 09:41, 13 August 2012 (UTC)

You need to say whether all games have a Nash equilibrium.

Dr. T. Roberts. 98.197.163.72 (talk) 16:49, 6 November 2010 (UTC)

I second that. There is a "proof of existence" in the article, but the theorem being proven is not stated, and (to me at least) not clear. 188.169.229.30 (talk) 00:00, 25 November 2012 (UTC)
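For reference, the existence theorem the proof section is establishing is usually stated as follows (this is the standard textbook formulation, not necessarily the article's exact wording):

```latex
\begin{theorem}[Nash, 1951]
Every finite game---that is, every game with a finite number of players
in which each player has a finite set of pure strategies---has at least
one Nash equilibrium, possibly in mixed strategies.
\end{theorem}
```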

Poker

How come poker isn't even mentioned in the article?

Nash himself used poker as an application, in fact, the major application in his original paper.

YohanN7 (talk) 04:31, 7 December 2012 (UTC)

"Nash equilibria in a payoff matrix" vs "Competition game"

I am confused, because when I used the matrix method of finding Nash equilibria on the "Competition game", I got only one result: (0,0). Therefore, to my understanding these two sections are inconsistent, unless some additional conditions are present. — Preceding unsigned comment added by Gajatko (talkcontribs) 23:51, 26 September 2014 (UTC)

Mistake in Competition_game

"This game has a unique pure-strategy Nash equilibrium: both players choosing 0 (highlighted in light red). Any other strategy can be improved by a player switching his number to one less than that of the other player."

That can't possibly be correct. Consider the starting point of (0,1). The player with 0 can't improve by switching to one less than the other, because 1-1=0 and he's already there. The player with 1 can't improve by switching to one less than the other, because 0-1=-1 which is not an allowable integer in the range 0 to 3.

Somebody who knows the correct statement, please fix this text. 198.144.192.45 (talk) 21:45, 25 May 2015 (UTC) Twitter.Com/CalRobert (Robert Maas)
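The uniqueness claim itself can be brute-force checked. A minimal sketch, assuming the payoff rule as I read it from the article (each player wins the smaller of the two numbers, and whoever picked the larger number gives up two points to the other); the function names are mine:

```python
def payoff(me, other):
    """Assumed rule: both pick an integer 0..3; each wins min(me, other);
    the player who picked the larger number gives up two points."""
    m = min(me, other)
    if me < other:
        return m + 2
    if me > other:
        return m - 2
    return m

def pure_equilibria():
    """Enumerate all pure-strategy profiles and keep the mutual best responses."""
    eqs = []
    for a in range(4):
        for b in range(4):
            best_a = all(payoff(a, b) >= payoff(x, b) for x in range(4))
            best_b = all(payoff(b, a) >= payoff(y, a) for y in range(4))
            if best_a and best_b:
                eqs.append((a, b))
    return eqs
```

Under this rule the search returns only (0, 0), so the uniqueness claim in the article holds; it is only the "one less than" explanation that fails at profiles like (0, 1), where the profitable deviation for the player holding 1 is to match at 0 rather than undercut.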

Rock-paper-scissors is not an example of a Nash equilibrium!

It is a game, not an equilibrium. It may HAVE a Nash equilibrium solution, but the game itself is a game, not a strategy for playing a game. GeneCallahan (talk) 21:49, 8 March 2016 (UTC)

I cannot find Rock-paper-scissors in the article (except for a link at the bottom). YohanN7 (talk) 08:48, 9 March 2016 (UTC)

Poker

I have made this request before. Poker is a major application and is also an example (the only one?) in Nash's original publication. A Nash equilibrium for the game of heads-up fixed-limit Hold'em has been computed by a research team at the University of Alberta. This is verifiable; there is a publication in Science (Bowling et al., 2015) as well as in specialized journals. I could dig them up if requested.

I don't have the required expertise to write a section myself. YohanN7 (talk) 08:58, 9 March 2016 (UTC)

Dr. Carmona's comment on this article

Dr. Carmona has reviewed this Wikipedia page, and provided us with the following comments to improve its quality:


Several comments:

(1) On the paragraph: Game theorists use the Nash equilibrium concept to analyze the outcome of the strategic interaction of several decision makers. In other words, it provides a way of predicting what will happen if several people or several institutions are making decisions at the same time, and if the outcome depends on the decisions of the others. The simple insight underlying John Nash's idea is that one cannot predict the result of the choices of multiple decision makers if one analyzes those decisions in isolation. Instead, one must ask what each player would do, taking into account the decision-making of the others.

At this point in the article, it is hard to understand what "John Nash's idea" means, since it has not yet been said that John Nash created the concept of Nash equilibrium. My suggestion: replace it with "Nash equilibrium".

(2) On the paragraph: The modern game-theoretic concept of Nash equilibrium is instead defined in terms of mixed strategies, where players choose a probability distribution over possible actions. The concept of the mixed-strategy Nash equilibrium was introduced by John von Neumann and Oskar Morgenstern in their 1944 book The Theory of Games and Economic Behavior. However, their analysis was restricted to the special case of zero-sum games. They showed that a mixed-strategy Nash equilibrium will exist for any zero-sum game with a finite set of actions.[8] The contribution of Nash in his 1951 article Non-Cooperative Games was to define a mixed-strategy Nash equilibrium for any game with a finite set of actions and prove that at least one (mixed-strategy) Nash equilibrium must exist in such a game. The key to Nash's ability to prove existence far more generally than von Neumann lay in his definition of equilibrium. According to Nash, "an equilibrium point is an n-tuple such that each player's mixed strategy maximizes his payoff if the strategies of the others are held fixed. Thus each player's strategy is optimal against those of the others." Just putting the problem in this framework allowed Nash to employ the Kakutani fixed point theorem in his 1950 paper, and a variant upon it in his 1951 paper used the Brouwer fixed point theorem to prove that there had to exist at least one set of mixed strategies that mapped back into themselves for non zero-sum games, namely, a set of strategies that did not call for a shift in strategies that could improve payoffs.

Specifically, I would change "set of mixed strategies that mapped back into themselves for non zero-sum games, namely, a set of strategies that" to "mixed strategy profile that mapped back into itself for finite-player (not necessarily zero-sum) games, namely, a strategy profile that". I would perhaps add a note at the end of the paragraph saying: "See Carmona, G. and K. Podczeck, 2009, 'On the Existence of Pure Strategy Nash Equilibria in Large Games', Journal of Economic Theory, 144, 1300-1319, for (a unified view on) results for infinite-player games."

(3) In the paragraph: Informally, a set of strategies is a Nash equilibrium if no player can do better by unilaterally changing their strategy. To see what this means, imagine that each player is told the strategies of the others. Suppose then that each player asks themselves: "Knowing the strategies of the other players, and treating the strategies of the other players as set in stone, can I benefit by changing my strategy?"

I would replace "a set of strategies is a Nash equilibrium if no player can do better by unilaterally changing their strategy." with "a strategy profile is a Nash equilibrium if no player can do better by unilaterally changing his or her strategy."

(4) In the paragraph: If any player could answer "Yes", then that set of strategies is not a Nash equilibrium. But if every player prefers not to switch (or is indifferent between switching and not) then the set of strategies is a Nash equilibrium. Thus, each strategy in a Nash equilibrium is a best response to all other strategies in that equilibrium.[9]

I would replace "set of strategies" with "strategy profile".

(5) Section 5 Stability. I didn't understand it and I am not sure if it is correct, specifically I don't know of the result attributed to John Nash stated in its second paragraph. Reference to Mertens-stable equilibrium is missing; I am also not sure about this, as stability in the sense of Mertens (specifically, backward and forward induction) is relevant for extensive-form games, whereas the discussion in this section seems to be for normal-form games. I would either delete it for the moment or ask Andrew McLennan to revise it (contact below).

(6) Section 6. I would rewrite it as follows: In his Ph.D. dissertation, John Nash proposed two interpretations of his equilibrium concept, with the objective of showing how equilibrium points "(...) can be connected with observable phenomenon". One interpretation is rationalistic: if we assume that players are rational, know the full structure of the game, the game is played just once, and there is just one Nash equilibrium, then players will play according to that equilibrium. This idea was formalized by Aumann, R. and A. Brandenburger, 1995, "Epistemic Conditions for Nash Equilibrium", Econometrica, 63, 1161-1180, who interpreted each player's mixed strategy as a conjecture about the behaviour of other players and showed that if the game and the rationality of the players are mutually known and these conjectures are commonly known, then the conjectures must be a Nash equilibrium (a common prior assumption is needed for this result in general, but not in the case of two players; in this case, the conjectures need only be mutually known).

A second interpretation, which Nash referred to as the mass-action interpretation, is less demanding on players: "[i]t is unnecessary to assume that the participants have full knowledge of the total structure of the game, or the ability and inclination to go through any complex reasoning processes. What is assumed is that there is a population of participants for each position in the game, which will be played throughout time by participants drawn at random from the different populations." If there is a stable average frequency with which each pure strategy is employed by the "average member" of the appropriate population, then this stable average frequency constitutes a mixed-strategy Nash equilibrium. For a formal result along these lines, see Kuhn, H., et al., 1996, "The Work of John Nash in Game Theory", Journal of Economic Theory, 69, 153-185.


We hope Wikipedians on this talk page can take advantage of these comments and improve the quality of the article accordingly.

Dr. Carmona has published scholarly research which seems to be relevant to this Wikipedia article:


  • Reference : Guilherme Carmona, 2004. "Nash Equilibria of Games with a Continuum of Players," Game Theory and Information 0412009, EconWPA.

ExpertIdeasBot (talk) 13:34, 11 June 2016 (UTC)

Prisoners dilemma

It would be nice if the accompanying table could be converted to a picture (SVG, whatever) instead of an editable table. I believe a majority of novices misunderstand, mistaking NE for "optimal", and accordingly (and incorrectly) edit the table so that "both defect" has a higher payout than "both cooperate". The result is that the section is incorrect a large portion of the time.

This phenomenon also indicates that the text isn't as good as it could be. YohanN7 (talk) 11:32, 16 November 2015 (UTC)

Sorry for not responding earlier; the page didn't have the talk tab at the top for some reason. Then my battery went dead. I suspect my browser is infected with MiB malware (iPad, Safari & Firefox). As I put in the last edit, you can verify the "defect" as the longer jail time in multiple sources, including the Wikipedia main article for "Prisoner's Dilemma". Or if you need a separate source, Khan Academy has a video showing the same result, or any other source. TakenItEasy1 (talk) 23:44, 20 October 2016 (UTC)

I do agree the language is vague and can be confusing; e.g. "cooperate" could be interpreted as cooperating with the 2nd prisoner or cooperating with the police, and "defect" is inaccurate. "Betray" and "not betray" make it clear.

TakenItEasy1 (talk) 23:49, 20 October 2016 (UTC)
Sorry again, I forgot the colon in the last entry and couldn't find a way to correct it. I can see the iPad isn't going to work well here, as the keyboard hides the edit box. You're correct, I'm new, so still figuring out the nuances. Has a forum format for the discussion page ever been considered? TakenItEasy1 (talk) 00:34, 21 October 2016 (UTC)
Hm. Your recent edits to the article clearly show the need to convert this to a non-editable picture. What you and many others misunderstand is that a Nash equilibrium may not be the best or optimal outcome.
The optimal outcome (from the point of view of the two prisoners) is that they both cooperate (do not squeal). They get two points each, but neither goes totally free (3 points = no jail sentence), because that would require the other to confess. If they both defect (squeal), they get only one point each, and then the only way for either prisoner to change tactics is to say nothing (not squeal). Such a change would set the other prisoner free (3 points) and earn the non-squealer the full sentence (0 points). This is a Nash equilibrium: no prisoner can unilaterally change his tactics to improve his own situation.
By contrast, if both cooperate (do not squeal; the optimal outcome taken together, (2, 2)), then either of the two could simply choose to squeal and improve his reward to 3 points (no sentence), while the other poor sod gets the full jail sentence (0 points). In other words, both cooperating is not a Nash equilibrium, because either player can improve his situation by changing tactics. YohanN7 (talk) 08:16, 21 October 2016 (UTC)
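The point-scoring argument above can be checked mechanically. A minimal sketch, assuming exactly the payoffs described (2,2 for mutual cooperation, 3,0 / 0,3 for a lone defector, 1,1 for mutual defection); the helper name is mine:

```python
# Payoffs as described above: C = cooperate (stay silent), D = defect (squeal).
PD = {
    ("C", "C"): (2, 2), ("C", "D"): (0, 3),
    ("D", "C"): (3, 0), ("D", "D"): (1, 1),
}

def unilateral_improvements(profile):
    """List the players who could gain by deviating alone from `profile`."""
    gainers = []
    for i in range(2):
        for a in ("C", "D"):
            dev = tuple(a if j == i else profile[j] for j in range(2))
            if PD[dev][i] > PD[profile][i]:
                gainers.append(i)
                break
    return gainers
```

Only ("D", "D") yields an empty list of gainers, so it is the unique Nash equilibrium even though ("C", "C") pays both players more, which is exactly the distinction novice editors keep tripping over.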

ok, I now see the disconnect in our understanding. You see the numbers as rewards for the prisoners, while I understood them to be penalties, in years, for the prisoners' sentences. I think the confusion comes from the table being labeled a payoff table, which I took to refer to the jailers' incentives in terms of achieving the maximum combined jail sentence.

I think, if you click on the link to the main article for prisoner's dilemma, it should clarify the problem. TakenItEasy1 (talk) 20:18, 24 October 2016 (UTC)

I can't believe I forgot the indent symbols again!
TakenItEasy1 (talk) 20:22, 24 October 2016 (UTC)

Proof is Unclear

Quoting from the section Alternate proof using the Brouwer fixed-point theorem:

The gain function represents the benefit a player gets by unilaterally changing their strategy. We now define  where

The symbol is not defined anywhere, which makes the proof unreadable. Can we include more details so that the reader finds it smooth to read through the proof?

- Sudarshansudarshan (talk) 18:44, 16 June 2018 (UTC)
Wait, no, the bit you’ve quoted *ends with* the definition of . What you meant is different from what you’ve actually said, right? (Happens to us all . . !) Can you clarify? – SquisherDa (talk) 00:16, 20 November 2018 (UTC)
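If it helps the discussion, the definitions in the article's Brouwer proof are, as best I can reconstruct them (this is my reading, please check it against the current article text):

```latex
\operatorname{Gain}_i(\sigma, a)
  = \max\bigl(0,\; u_i(a, \sigma_{-i}) - u_i(\sigma)\bigr),
\qquad
g(\sigma)_i(a)
  = \frac{\sigma_i(a) + \operatorname{Gain}_i(\sigma, a)}
         {1 + \sum_{b} \operatorname{Gain}_i(\sigma, b)} .
```

That is, the gain function measures how much player $i$ would benefit by switching to the pure strategy $a$ while the others keep playing $\sigma_{-i}$, and $g$ shifts probability toward profitable deviations and renormalizes; fixed points of $g$ are exactly the Nash equilibria.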

confusion

This article needs an example that makes more sense. I don't really understand this concept.

I agree. A better example would be two farmers. Separately, each gets the highest profit by cultivating his fields with a tractor; they are in equilibrium. But if they cooperate in sharing the use and the cost of one tractor, they each increase their net profit by half the cost of the tractor. Cooperation fails if one refuses to pay his share: the cost of the whole tractor then accrues to the other farmer, while the shirking farmer receives the benefit of the tractor without paying. Since the greatest benefit for an individual comes from shirking the agreement, and since all parties reasoning the same way will shirk, so that none benefit, contracts must impose compensatory costs on shirking to maintain an equilibrium in which all play by the same rules, get the same benefits, and suffer the same costs for cheating. In the movie "A Beautiful Mind", the example where the men agree not to approach the blonde so they can each get a girl is betrayed if one of the men goes for the blonde himself. If all of them know the others will cheat, then there will be no agreement, and none will get either the blonde or her friends. 98.164.64.98 (talk) 05:49, 2 February 2019 (UTC)

John Nash

So what did Nash do, besides defining this equilibrium? Are there any interesting theorems here? AxelBoldt

Well, what was interesting was mainly this (I'm typing this from memory, so don't quote me or put this into the article without checking with some up-to-date math nerd): for all games for which there was a previously known "solution" (for some definition of solution appropriate to that type of game), Nash proved that those existing solutions were Nash equilibria; further, he showed that any reasonable definition of "solution" for any other type of game must be a subset of the Nash equilibria for that game. And finally, he showed how to find the Nash equilibria. So he made it much easier to solve all kinds of games--even those for which a definition of "solved" isn't clear--by reducing the problem to finding all the Nash equilibria and evaluating them. --LDC

Nash also proved an existence theorem-- some conditions sufficient for his equilibrium to exist.
--editeur24 (talk) 00:17, 20 April 2020 (UTC)

Stability should be spun off as a stub for a new article

The section on stability should perhaps be deleted and put elsewhere as an article on Stability (game theory). It isn't really relevant to Nash equilibrium. Stability ideas can be applied to any equilibrium concept, and you can fully understand Nash equilibrium while knowing nothing about stability. --editeur24 (talk) 00:59, 20 April 2020 (UTC)

Description of computing mixed-strategy NE

I added the word "pure" in the section on computing mixed-strategy NE. It is crucial to distinguish between pure and mixed strategies, as the former is a component of the latter. In the description it was missing. — Preceding unsigned comment added by 31.183.237.65 (talk) 11:49, 10 September 2020 (UTC)

Finding NEs

Should there be no discussion/reference of how to actually find NE in particular types of games?

I mean:

  • 2 player games
    • zero-sum games -> linear program
    • general sum games -> LCP
  • k player games -> ?

(that's how I got here. I'm looking for methods to find a NE in multiplayer games)
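For the "zero-sum games -> linear program" entry above, the 2x2 case even has a closed form via the standard indifference calculation. A minimal sketch (function name mine; it assumes the game has no pure-strategy saddle point, so the denominator is nonzero):

```python
def solve_2x2_zero_sum(A):
    """Mixed-strategy equilibrium of a 2x2 zero-sum game.

    A = [[a, b], [c, d]] holds the ROW player's payoffs; the column
    player receives the negation.  Derived from the indifference
    conditions: each player mixes so the opponent is indifferent.
    Assumes no pure saddle point (otherwise divide by zero).
    """
    (a, b), (c, d) = A
    denom = a - b - c + d
    p = (d - c) / denom              # P(row plays row 0)
    q = (d - b) / denom              # P(column plays column 0)
    value = (a * d - b * c) / denom  # game value to the row player
    return p, q, value
```

For matching pennies, [[1, -1], [-1, 1]], this gives (0.5, 0.5) with value 0, as expected. For larger zero-sum games the same indifference idea generalizes to a linear program (solvable with, e.g., scipy.optimize.linprog); for general-sum k-player games there is no comparably clean method, as noted in the reply below.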

There is no efficient general algorithm that finds all the Nash equilibria of a game. It would actually be useful to explain that, since students often ask about it, just to tell them that it's hopeless in general. There are methods for continuous-strategy games that satisfy strong convexity properties, but those are very special cases. There are methods for zero-sum games too, but those are also very special and come up so rarely in practice that I tend to forget about them.
editeur24 (talk) 02:58, 11 September 2020 (UTC)

Continuous set?

The article says: "if the set of strategies by player i, is a compact and continuous set"

What the heck is a continuous set?

This seems to be fixed now. I can't find that phrase.--editeur24 (talk) 16:20, 19 December 2020 (UTC)

Splitting proposal: Existence Proofs for Nash Equilibrium

I propose splitting off Existence Proofs for Nash Equilibrium from Nash equilibrium. The two sections on existence proofs are no doubt very useful for some readers, but only to a very few who come to this article, and they would easily be able to find the link in it to the new article. The sections are very technical, and of no interest to most readers. Also, it might be easier for people searching for information on existence proofs to find this from search engines if it's a separate article. editeur24 (talk) 23:22, 19 December 2020 (UTC)

Network strategy error

This paragraph talks of a choice of three routes from A to D, those being ACD, ABD and ABCD. BZZT! What about ACBD? Davidrust (talk) 20:44, 19 March 2021 (UTC)

Simplified Form and Related Results Archived 2021-07-31 at the Wayback Machine

None of the links behind the above line in the external links section work, or make sense. The wayback machine landing page does not seem to contain this title. Dbague (talk) 01:19, 22 February 2023 (UTC)