Bayesian persuasion

From Wikipedia, the free encyclopedia

In economics and game theory, Bayesian persuasion is a model in which one participant (the sender) wants to persuade the other (the receiver) to take a certain course of action. There is an unknown state of the world, and the sender commits in advance to a policy for disclosing information about it. Upon observing the disclosed information, the receiver updates their belief about the state of the world using Bayes' rule and selects an action. Bayesian persuasion was introduced by Kamenica and Gentzkow,[1] though its origins can be traced back to Aumann and Maschler (1995).

Bayesian persuasion is a special case of a principal–agent problem: the principal is the sender and the agent is the receiver. It can also be seen as a communication protocol, comparable to signaling games;[2] the sender must decide what signal to reveal to the receiver to maximize their expected utility. It can also be seen as a form of cheap talk.[3]

Example

Consider the following illustrative example. There is a medicine company (sender), and a medical regulator (receiver). The company produces a new medicine, and needs the approval of the regulator. There are two possible states of the world: the medicine can be either "good" or "bad". The company and the regulator do not know the true state. However, the company can run an experiment and report the results to the regulator. The question is what experiment the company should run in order to get the best outcome for themselves. The assumptions are:

  • Both company and regulator share a common prior probability that the medicine is good.
  • The company must commit to the experiment design and the reporting of the results (so there is no element of deception). The regulator observes the experiment design.
  • The company receives a payoff if and only if the medicine is approved.
  • The regulator receives a payoff if and only if it makes the accurate decision: approving a good medicine or rejecting a bad one.


For example, suppose the prior probability that the medicine is good is 1/3 and that the company has a choice of three actions:

  1. Conduct a thorough experiment that always detects whether the medicine is good or bad, and truthfully report the results to the regulator. In this case, the regulator will approve the medicine with probability 1/3, so the expected utility of the company is 1/3.
  2. Don't conduct any experiment; always say "the medicine is good". In this case, the signal gives no information to the regulator. Since the regulator still believes the medicine is good with probability 1/3, rejecting is accurate with probability 2/3 while approving is accurate only with probability 1/3, so the regulator always rejects. Therefore, the expected utility of the company is 0.
  3. Conduct an experiment that, if the medicine is good, always reports "good", and if the medicine is bad, it reports "good" or "bad" with probability 1/2. Here, the regulator applies Bayes' rule: given a signal "good", the probability that the medicine is good is 1/2, so the regulator approves it. Given a signal "bad", the probability that the medicine is good is 0, so the regulator rejects it. All in all, the regulator approves the medicine in 2/3 of the cases, so the expected utility of the company is 2/3.

In this case, the third policy is optimal for the sender, as it yields the highest expected utility of the three options. By committing to a partially informative experiment and relying on the receiver's Bayesian updating, the sender has persuaded the receiver to act favorably to the sender in 2/3 of the cases.
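The posterior beliefs and expected utilities in the example can be verified with a short calculation. The sketch below (helper names are illustrative, not from the literature) uses exact rational arithmetic and assumes the regulator approves whenever the posterior probability that the medicine is good is at least 1/2, with ties broken in the sender's favor:

```python
from fractions import Fraction

def posterior_good(prior, p_good_sig_if_good, p_good_sig_if_bad):
    """Bayes' rule: P(state is good | signal says 'good')."""
    num = prior * p_good_sig_if_good
    den = num + (1 - prior) * p_good_sig_if_bad
    return num / den

def sender_utility(prior, p_gg, p_gb, threshold=Fraction(1, 2)):
    """Approval probability for an experiment that reports 'good'
    with probability p_gg in the good state and p_gb in the bad state.
    The regulator approves iff the posterior P(good) >= threshold."""
    utility = Fraction(0)
    # Probability the signal says "good", and the posterior it induces.
    p_good_sig = prior * p_gg + (1 - prior) * p_gb
    if p_good_sig > 0 and posterior_good(prior, p_gg, p_gb) >= threshold:
        utility += p_good_sig
    # Probability the signal says "bad", and the posterior it induces.
    p_bad_sig = prior * (1 - p_gg) + (1 - prior) * (1 - p_gb)
    if p_bad_sig > 0:
        post_bad = (prior * (1 - p_gg)) / p_bad_sig
        if post_bad >= threshold:
            utility += p_bad_sig
    return utility

prior = Fraction(1, 3)
full = sender_utility(prior, Fraction(1), Fraction(0))        # policy 1: fully revealing
none = sender_utility(prior, Fraction(1), Fraction(1))        # policy 2: uninformative
partial = sender_utility(prior, Fraction(1), Fraction(1, 2))  # policy 3: partially informative
print(full, none, partial)  # 1/3 0 2/3
```

The partially informative experiment pools just enough bad states with the good ones to keep the "good"-signal posterior at exactly the regulator's approval threshold, which is why it dominates full disclosure.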

Generalized model

The basic model has been generalized in a number of ways, including:

  • The receiver may have private information not shared with the sender.[4][5][6]
  • The sender and receiver may have a different prior on the state of the world.[7]
  • There may be multiple senders, where each sends a signal simultaneously and all receivers receive all signals before acting.[8][9]
  • There may be multiple senders who send signals sequentially, and the receiver receives all signals before acting.[10]
  • There may be multiple receivers, including cases where each receives their own signal, the same signal, or signals which are correlated in some way, and where each receiver may factor in the actions of other receivers.[11]
  • A series of signals may be sent over time.[12]

Practical application

The applicability of the model has been assessed in a number of real-world contexts, including:

  • Stress tests, in which a regulator discloses information about banks.[13]
  • Grading standards and education quality.[14]
  • Motivation of employees.[15]
  • Suspense and surprise in entertainment.[16]

Computational approach

Algorithmic techniques have been developed to compute the optimal signaling scheme in practice. It can be found in time polynomial in the number of the receiver's actions and pseudo-polynomial in the number of states of the world.[3] Algorithms with lower computational complexity are possible under stronger assumptions.
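In the special case of two states and a binary approve/reject decision, as in the medicine example, no general algorithm is needed: concavifying the sender's value function yields a closed-form optimum. The sketch below (function name and the tie-breaking assumption are illustrative) returns the sender's optimal approval probability and the probability of reporting "good" in the bad state:

```python
from fractions import Fraction

def optimal_binary_persuasion(prior, threshold):
    """Two states ('good'/'bad'), binary action: the receiver approves
    iff the posterior P(good) >= threshold (ties favor the sender).
    Returns (sender's optimal approval probability, probability q of
    reporting 'good' in the bad state). The good state is always
    reported truthfully as 'good'."""
    if prior >= threshold:
        # The receiver approves under the prior; reveal nothing.
        return Fraction(1), Fraction(1)
    # Choose q so the posterior after a 'good' signal equals the
    # threshold exactly: prior / (prior + (1 - prior) * q) == threshold.
    q = prior * (1 - threshold) / ((1 - prior) * threshold)
    value = prior + (1 - prior) * q  # equals prior / threshold
    return value, q

value, q = optimal_binary_persuasion(Fraction(1, 3), Fraction(1, 2))
print(value, q)  # 2/3 1/2, matching the third policy in the example
```

The optimal value simplifies to min(1, prior/threshold): the sender can never do better than inflating the prior up to the receiver's approval threshold.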

The online case, where multiple signals are sent over time, can be solved efficiently as a regret minimization problem.[17]

References

  1. ^ Kamenica, Emir; Gentzkow, Matthew (2011-10-01). "Bayesian Persuasion". American Economic Review. 101 (6): 2590–2615. doi:10.1257/aer.101.6.2590. ISSN 0002-8282.
  2. ^ Kamenica, Emir (2019-05-13). "Bayesian Persuasion and Information Design". Annual Review of Economics. 11: 249–272. doi:10.1146/annurev-economics-080218-025739.
  3. ^ a b Dughmi, Shaddin; Xu, Haifeng (June 2016). "Algorithmic Bayesian persuasion". Proceedings of the forty-eighth annual ACM symposium on Theory of Computing. pp. 412–425. arXiv:1503.05988. doi:10.1145/2897518.2897583. ISBN 978-1-4503-4132-5.
  4. ^ Hedlund, Jonas (2017-01-01). "Bayesian persuasion by a privately informed sender". Journal of Economic Theory. 167: 229–268. doi:10.1016/j.jet.2016.11.003.
  5. ^ Kolotilin, Anton (2018-05-29). "Optimal information disclosure: A linear programming approach". Theoretical Economics. 13 (2): 607–635. doi:10.3982/TE1805. hdl:10419/197158.
  6. ^ Rayo, Luis; Segal, Ilya (2010-10-01). "Optimal Information Disclosure". Journal of Political Economy. 118 (5): 949–987. doi:10.1086/657922.
  7. ^ Camara, Modibo K.; Hartline, Jason D.; Johnsen, Aleck (2020-11-01). "Mechanisms for a No-Regret Agent: Beyond the Common Prior". 2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS). IEEE. pp. 259–270. arXiv:2009.05518. doi:10.1109/focs46700.2020.00033. ISBN 978-1-7281-9621-3.
  8. ^ Gentzkow, Matthew; Kamenica, Emir (2016-10-18). "Competition in Persuasion". The Review of Economic Studies. 84: 300–322. doi:10.1093/restud/rdw052.
  9. ^ Gentzkow, Matthew; Shapiro, Jesse M. (2008). "Competition and Trust in the Market for News". Journal of Economic Perspectives. 22 (2): 133–154. doi:10.1257/jep.22.2.133.
  10. ^ Li, Fei; Norman, Peter (2021). "Sequential Persuasion". Theoretical Economics. 16 (2): 639–675. doi:10.3982/TE3474.
  11. ^ Bergemann, Dirk; Morris, Stephen (2019-03-01). "Information Design: A Unified Perspective". Journal of Economic Literature. 57: 44–95. doi:10.1257/jel.20181489.
  12. ^ Ely, Jeffrey C. (January 2017). "Beeps". American Economic Review. 107 (1): 31–53. doi:10.1257/aer.20150218.
  13. ^ Goldstein, Itay; Leitner, Yaron (September 2018). "Stress tests and information disclosure". Journal of Economic Theory. 177: 34–69. doi:10.1016/j.jet.2018.05.013.
  14. ^ Boleslavsky, Raphael; Cotton, Christopher (May 2015). "Grading Standards and Education Quality". American Economic Journal: Microeconomics. 7 (2): 248–279. doi:10.1257/mic.20130080.
  15. ^ Habibi, Amir (January 2020). "Motivation and information design". Journal of Economic Behavior & Organization. 169: 1–18. doi:10.1016/j.jebo.2019.10.015.
  16. ^ Ely, Jeffrey; Frankel, Alexander; Kamenica, Emir (February 2015). "Suspense and Surprise". Journal of Political Economy. 123: 215–260. doi:10.1086/677350.
  17. ^ Bernasconi, Martino; Castiglioni, Matteo (2023). "Optimal Rates and Efficient Algorithms for Online Bayesian Persuasion". Proceedings of Machine Learning Research. 202: 2164–2183. arXiv:2303.01296.