Yao's principle

In computational complexity theory, Yao's principle (also called Yao's minimax principle or Yao's lemma) relates the performance of randomized algorithms to deterministic (non-random) algorithms. It states that, for certain classes of algorithms, and certain measures of the performance of the algorithms, the following two quantities are equal:

  • The optimal performance that can be obtained by a deterministic algorithm on a random input, for a probability distribution on inputs chosen to be as hard as possible and for an algorithm chosen to work as well as possible against that distribution
  • The optimal performance that can be obtained by a randomized algorithm on a deterministic input, for an algorithm chosen to have the best performance on its worst case inputs, and the worst case input to the algorithm

Yao's principle is often used to prove limitations on the performance of randomized algorithms, by finding a probability distribution on inputs that is difficult for deterministic algorithms, and inferring that randomized algorithms have the same limitation on their worst case performance.

This principle is named after Andrew Yao, who first proposed it in a 1977 paper.[1] It is closely related to the minimax theorem in the theory of zero-sum games, and to the duality theory of linear programs.

Formulation

Consider any cost or performance measure of an algorithm that can be expressed as an expected value, over a distribution of random inputs to the algorithm or of random behaviors of the algorithm, of some outcome. For instance, the expected time of an algorithm is the expected value of the time taken by a single run of the algorithm. Consider, also, a class $\mathcal{A}$ of deterministic algorithms for which the number of distinct behaviors that are possible, for inputs of a given size, is finite, such as the decision tree model, and in which a randomized algorithm may be interpreted as generating an arbitrary probability distribution over these same potential behaviors. Let $\mathcal{R}$ be the class of randomized algorithms, defined in this way. Let $\mathcal{X}$ denote, for each input size $n$, a finite set of inputs of size $n$, and let $c(A,x)$ measure the cost or performance of algorithm $A$ on input $x$. Let $\mathcal{D}$ denote the class of probability distributions on $\mathcal{X}$. Then, Yao's principle states that:

$$\max_{D\in\mathcal{D}}\ \min_{A\in\mathcal{A}}\ \mathbb{E}_{x\sim D}\bigl[c(A,x)\bigr] \;=\; \min_{R\in\mathcal{R}}\ \max_{x\in\mathcal{X}}\ \mathbb{E}\bigl[c(R,x)\bigr].$$

Here, finiteness of the classes $\mathcal{A}$ and $\mathcal{X}$ of algorithms and inputs (for a fixed input size) allows $\mathcal{R}$ and $\mathcal{D}$ to be interpreted as compact spaces (simplices of probability vectors), implying that the minima and maxima in these formulas exist. The optimally hard input distribution maximizing the left hand side of this equality and the optimal randomized algorithm minimizing the right hand side may not be easy to describe, as their calculation may involve the solution of very large linear programs.[2][3] Nevertheless, the same principle can be used to convert any input distribution (regardless of its optimality) into a lower bound on the cost of all randomized algorithms, regardless of their implementability. It follows from the equality above that, if $D$ is any specific choice of a hard input distribution, and $R$ is any specific randomized algorithm in $\mathcal{R}$, then

$$\min_{A\in\mathcal{A}}\ \mathbb{E}_{x\sim D}\bigl[c(A,x)\bigr] \;\le\; \max_{x\in\mathcal{X}}\ \mathbb{E}\bigl[c(R,x)\bigr].$$

That is, the best possible deterministic performance against distribution $D$ is a lower bound for the performance of algorithm $R$ against its worst-case input, regardless of what $R$ is. One may also observe this weaker version of Yao's principle directly, as

$$\min_{A\in\mathcal{A}}\ \mathbb{E}_{x\sim D}\bigl[c(A,x)\bigr] \;\le\; \mathbb{E}_{x\sim D}\bigl[c(R,x)\bigr] \;\le\; \max_{x\in\mathcal{X}}\ \mathbb{E}\bigl[c(R,x)\bigr],$$

by linearity of expectation and the principle that $\min\le\mathbb{E}\le\max$ for any distribution. By avoiding maximization over $\mathcal{D}$ and minimization over $\mathcal{R}$, this version of Yao's principle can apply in some cases where $\mathcal{A}$ or $\mathcal{X}$ are not finite.[4]
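
As a toy worked example (constructed here for illustration; it does not appear in the cited sources), take two deterministic algorithms $A_1, A_2$ and two inputs $x_1, x_2$, with cost $c(A_i, x_j) = 1$ if $i = j$ and $0$ otherwise. Each deterministic algorithm has worst-case cost $1$, but the randomized algorithm choosing $A_1$ or $A_2$ with probability $\tfrac{1}{2}$ each has expected cost $\tfrac{1}{2}$ on every input, and under the uniform input distribution every deterministic algorithm has expected cost $\tfrac{1}{2}$. Both sides of the equality therefore equal

$$\max_{D\in\mathcal{D}}\ \min_{A\in\mathcal{A}}\ \mathbb{E}_{x\sim D}\bigl[c(A,x)\bigr] \;=\; \min_{R\in\mathcal{R}}\ \max_{x\in\mathcal{X}}\ \mathbb{E}\bigl[c(R,x)\bigr] \;=\; \tfrac{1}{2},$$

strictly below the worst-case cost $1$ of every deterministic algorithm.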

Applications and examples

Time complexity

When the cost $c$ denotes the running time of an algorithm, Yao's principle states that the best possible running time of a deterministic algorithm, on a hard input distribution, gives a lower bound for the expected time of any Las Vegas algorithm on its worst-case input. Here, a Las Vegas algorithm is a randomized algorithm whose runtime may vary, but for which the result is always correct.[5][6] For example, this form of Yao's principle has been used to prove the optimality of certain Monte Carlo tree search algorithms for the exact evaluation of game trees.[6]
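
As a concrete illustration (a minimal sketch, with a hypothetical tree encoding not taken from the cited sources), the following randomized evaluator of AND–OR game trees is a Las Vegas algorithm in this sense: its output is always the exact value of the tree, but the number of leaves it reads depends on its random choices.

    import random

    def evaluate(node):
        """Las Vegas evaluation of an AND/OR game tree.

        Leaves are plain booleans; internal nodes are ('AND', children)
        or ('OR', children) tuples (an encoding assumed for this sketch).
        The answer is always exact; only the amount of work is random."""
        if isinstance(node, bool):      # leaf: return its value directly
            return node
        op, children = node
        order = list(children)
        random.shuffle(order)           # randomize subtree evaluation order
        for child in order:
            value = evaluate(child)
            if (op == 'AND' and not value) or (op == 'OR' and value):
                return value            # short-circuit: skip remaining subtrees
        return op == 'AND'              # no short-circuit: AND is true, OR is false

    tree = ('AND', [('OR', [True, False]), ('OR', [False, False])])
    print(evaluate(tree))               # always False; the work varies per run

Randomizing the evaluation order is what defeats any single worst-case input: an adversary fixing one input cannot know which subtrees will be read first.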

Query complexity

One of the original applications by Yao of his principle was to the evasiveness of graph properties, the number of tests of the adjacency of pairs of vertices needed to determine whether a graph has a given property, when the only access to the graph is through such tests. As Yao showed, for graph properties that are true of the empty graph but false for some other graph on $n$ vertices with only a bounded number of edges, such as planarity, a randomized algorithm must probe a quadratic number of pairs of vertices. The proof uses Yao's principle, with a hard distribution of inputs consisting of random permutations of the graph that does not have the property.[1]

The complexity of comparison-based sorting and selection algorithms is often measured in the number of comparisons made between pairs of data elements. For these problems, an averaging argument identifies the hardest input distributions: they are the distributions on $n$ distinct elements for which all permutations are equally likely. Yao's principle extends lower bounds for the average case number of comparisons made by deterministic algorithms, for this input distribution, to the worst case analysis of randomized comparison algorithms.[1]
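
Concretely (a standard counting argument, sketched here for illustration rather than quoted from the sources), a deterministic comparison-based sorting algorithm corresponds to a binary decision tree with at least $n!$ leaves, one per ordering, and the average depth of such a tree is at least $\log_2 n!$. Its expected number of comparisons under the uniform distribution on orderings is therefore at least $\log_2 n!$, and by the weak form of Yao's principle,

$$\max_{x\in\mathcal{X}}\ \mathbb{E}\bigl[c(R,x)\bigr] \;\ge\; \log_2 n! \;=\; n\log_2 n - O(n)$$

for every randomized comparison-based sorting algorithm $R$.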

In black-box optimization, the problem is to determine the minimum or maximum value of a function, from a given class of functions, accessible only through calls to the function on arguments from some finite domain. In this case, the cost to be optimized is the number of calls. Yao's principle has been described as "the only method available for proving lower bounds for all randomized search heuristics for selected classes of problems".[3]

Communication complexity

In communication complexity, an algorithm describes a communication protocol between two or more parties, and its cost may be the number of bits or messages transmitted between the parties. In this case, Yao's principle describes an equality between the average-case complexity of deterministic communication protocols, on an input distribution that is the worst case for the problem, and the expected communication complexity of randomized protocols on their worst-case inputs.[2][7]

Online algorithms

Yao's principle has also been applied to the competitive ratio of online algorithms. An online algorithm must respond to a sequence of requests, without knowledge of future requests, incurring some cost or profit per request depending on its choices. The competitive ratio is the ratio of its cost or profit to the value that could be achieved by an offline algorithm with access to knowledge of all future requests, for a worst-case request sequence that causes this ratio to be as far from one as possible. Here, one must be careful to formulate the ratio with the algorithm's performance in the numerator and the optimal performance of an offline algorithm in the denominator, so that the cost measure can be formulated as an expected value rather than as the reciprocal of an expected value.[4]

An example given by Borodin & El-Yaniv (2005) concerns page replacement algorithms, which respond to requests for pages of computer memory by using a cache of $k$ pages, for a given parameter $k$. If a request matches a cached page, it costs nothing; otherwise one of the cached pages must be replaced by the requested page, at a cost of one page fault. A difficult distribution of request sequences for this model can be generated by choosing each request uniformly at random from a pool of $k+1$ pages. For any deterministic online algorithm, the expected number of page faults, over a sequence of $n$ requests, is $n/(k+1)$: each request goes, with probability $1/(k+1)$, to the one page of the pool that is currently outside the cache. However, an offline algorithm can divide the request sequence into phases within which only $k$ pages are used, incurring only one fault at the start of a phase to replace the one page that is unused within the phase. By standard analysis of the coupon collector's problem, the expected number of requests in a phase is $(k+1)H_k$, where $H_k$ is the $k$th harmonic number. It follows from renewal theory that the optimal number of page faults behaves in the limit, for large $n$, as $n/\bigl((k+1)H_k\bigr)$, and therefore that the competitive ratio of any deterministic algorithm against this input distribution is $H_k$. By Yao's principle, $H_k$ is also a lower bound on the competitive ratio of any randomized page replacement algorithm against a request sequence chosen by an oblivious adversary to be a worst case for the algorithm but without knowledge of the algorithm's random choices.[8]
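
The following small simulation (an illustrative sketch, not taken from Borodin & El-Yaniv; LRU stands in for an arbitrary deterministic online algorithm, and the greedy phase count approximates the offline optimum) shows the empirical ratio approaching $H_k$:

    import random

    def lru_faults(requests, k):
        """Page faults of LRU with a cache of k pages."""
        cache = []                      # least recently used at the front
        faults = 0
        for page in requests:
            if page in cache:
                cache.remove(page)
            else:
                faults += 1
                if len(cache) == k:
                    cache.pop(0)        # evict the least recently used page
            cache.append(page)
        return faults

    def phase_count(requests, k):
        """Greedy partition into phases of at most k distinct pages;
        an offline algorithm incurs roughly one fault per phase."""
        phases, distinct = 1, set()
        for page in requests:
            distinct.add(page)
            if len(distinct) > k:       # this request opens a new phase
                phases += 1
                distinct = {page}
        return phases

    k, n = 8, 10**5
    requests = [random.randrange(k + 1) for _ in range(n)]
    H_k = sum(1.0 / i for i in range(1, k + 1))
    ratio = lru_faults(requests, k) / phase_count(requests, k)
    print(f"empirical ratio {ratio:.3f}, H_k = {H_k:.3f}")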

Game-theoretic interpretation

Yao's principle may be interpreted in game-theoretic terms, via a two-player zero-sum game in which one player, Alice, selects a deterministic algorithm, the other player, Bob, selects an input, and the payoff is the cost of the selected algorithm on the selected input. Any randomized algorithm $R$ may be interpreted as a randomized choice among deterministic algorithms, and thus as a mixed strategy for Alice. Similarly, a non-random algorithm may be thought of as a pure strategy for Alice. In any two-player zero-sum game, if one player fixes a mixed strategy, then there is an optimal pure strategy that the other player can choose against it. By the minimax theorem of John von Neumann, there exists a game value $v$, and mixed strategies for each player, such that the players can guarantee expected value $v$ or better by playing those strategies, and such that the optimal pure strategy against either mixed strategy produces expected value exactly $v$. Thus, the minimax mixed strategy for Alice, set against the best opposing pure strategy for Bob, produces the same expected game value as the minimax mixed strategy for Bob, set against the best opposing pure strategy for Alice. This equality of expected game values, for the game described above, is Yao's principle in its form as an equality.[4] Yao's 1977 paper, originally formulating Yao's principle, proved it in this way.[1]
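
The equality of the two game values can be checked numerically for any small cost matrix by solving a linear program for each player's optimal mixed strategy. The sketch below (the cost matrix is an arbitrary example chosen here, not one from the cited sources) uses scipy.optimize.linprog:

    import numpy as np
    from scipy.optimize import linprog

    C = np.array([[3.0, 1.0, 4.0],      # C[i, j] = cost of deterministic
                  [1.0, 5.0, 2.0]])     # algorithm i on input j
    m, n = C.shape

    # Alice: mixed strategy p over algorithms, minimizing v subject to
    # sum_i p[i] * C[i, j] <= v for every input j.
    alg = linprog(
        c=np.r_[np.zeros(m), 1.0],                        # minimize v
        A_ub=np.c_[C.T, -np.ones(n)], b_ub=np.zeros(n),   # C^T p - v <= 0
        A_eq=[np.r_[np.ones(m), 0.0]], b_eq=[1.0],        # p sums to 1
        bounds=[(0, None)] * m + [(None, None)],
    )

    # Bob: input distribution q over inputs, maximizing w subject to
    # sum_j C[i, j] * q[j] >= w for every algorithm i.
    inp = linprog(
        c=np.r_[np.zeros(n), -1.0],                       # maximize w
        A_ub=np.c_[-C, np.ones(m)], b_ub=np.zeros(m),     # w - C q <= 0
        A_eq=[np.r_[np.ones(n), 0.0]], b_eq=[1.0],        # q sums to 1
        bounds=[(0, None)] * n + [(None, None)],
    )

    print(alg.fun, -inp.fun)            # the two values coincide (the game value)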

References

  1. ^ a b c d Yao, Andrew (1977), "Probabilistic computations: Toward a unified measure of complexity", Proceedings of the 18th IEEE Symposium on Foundations of Computer Science (FOCS), pp. 222–227, doi:10.1109/SFCS.1977.24
  2. ^ a b Fortnow, Lance (October 16, 2006), "Favorite theorems: Yao principle", Computational Complexity
  3. ^ a b Wegener, Ingo (2005), "9.2 Yao's minimax principle", Complexity Theory: Exploring the Limits of Efficient Algorithms, Springer-Verlag, pp. 118–120, doi:10.1007/3-540-27477-4, ISBN 978-3-540-21045-0, MR 2146155
  4. ^ a b c Borodin, Allan; El-Yaniv, Ran (2005), "8.3 Yao's principle: A technique for obtaining lower bounds", Online Computation and Competitive Analysis, Cambridge University Press, pp. 115–120, ISBN 9780521619462
  5. ^ Moore, Cristopher; Mertens, Stephan (2011), "Theorem 10.1 (Yao's principle)", The Nature of Computation, Oxford University Press, p. 471, ISBN 9780199233212
  6. ^ a b Motwani, Rajeev; Raghavan, Prabhakar (2010), "Chapter 12: Randomized Algorithms", in Atallah, Mikhail J.; Blanton, Marina (eds.), Algorithms and Theory of Computation Handbook: General Concepts and Techniques (2nd ed.), CRC Press, pp. 12-1 – 12-24; see in particular Section 12.5: The minimax principle and lower bounds, pp. 12-8 – 12-10
  7. ^ Wigderson, Avi (2019), Mathematics and Computation: A Theory Revolutionizing Technology and Science, Princeton University Press, p. 210, ISBN 9780691189130
  8. ^ Borodin & El-Yaniv (2005), pp. 120–122, 8.4 Paging revisited.

Further reading

  • Ben-David, Shalev; Blais, Eric (2023), "A new minimax theorem for randomized algorithms", Journal of the ACM, 70 (6) 38, arXiv:2002.10802, doi:10.1145/3626514, MR 4679504
  • de Graaf, Mart; de Wolf, Ronald (2002), "On quantum versions of the Yao principle", in Alt, Helmut; Ferreira, Afonso (eds.), STACS 2002, 19th Annual Symposium on Theoretical Aspects of Computer Science, Antibes – Juan les Pins, France, March 14–16, 2002, Proceedings, Lecture Notes in Computer Science, vol. 2285, Springer, pp. 347–358, arXiv:quant-ph/0109070, doi:10.1007/3-540-45841-7_28