
Yao's principle

Equivalence of average-case and expected complexity

In computational complexity theory, Yao's principle (also called Yao's minimax principle or Yao's lemma) relates the performance of randomized algorithms to deterministic (non-random) algorithms. It states that, for certain classes of algorithms, and certain measures of the performance of the algorithms, the following two quantities are equal:

  • The optimal performance that can be obtained by a deterministic algorithm on a random input (its average-case complexity), for a probability distribution on inputs chosen to be as hard as possible and for an algorithm chosen to work as well as possible against that distribution
  • The optimal performance that can be obtained by a randomized algorithm on a deterministic input (its expected complexity), for an algorithm chosen to have the best performance on its worst case inputs, and the worst case input to the algorithm

Yao's principle is often used to prove limitations on the performance of randomized algorithms, by finding a probability distribution on inputs that is difficult for deterministic algorithms, and inferring that randomized algorithms have the same limitation on their worst case performance.

This principle is named after Andrew Yao, who first proposed it in a 1977 paper. It is closely related to the minimax theorem in the theory of zero-sum games, and to the duality theory of linear programs.

Formulation

Consider any cost measure c ( A , x ) {\displaystyle c(A,x)} of an algorithm A {\displaystyle A} on an input x {\displaystyle x} , such as its running time, for which we want to study the expected value over randomized algorithms and random inputs. Consider, also, any finite set A {\displaystyle {\mathcal {A}}} of deterministic algorithms (made finite, for instance, by limiting the algorithms to a specific input size), and a finite set X {\displaystyle {\mathcal {X}}} of inputs to these algorithms. Let R {\displaystyle {\mathcal {R}}} denote the class of randomized algorithms obtained from probability distributions over the deterministic behaviors in A {\displaystyle {\mathcal {A}}} , and let D {\displaystyle {\mathcal {D}}} denote the class of probability distributions on inputs in X {\displaystyle {\mathcal {X}}} . Then, Yao's principle states that:

{\displaystyle \max _{D\in {\mathcal {D}}}\min _{A\in {\mathcal {A}}}\mathbb {E} _{x\sim D}[c(A,x)]=\min _{R\in {\mathcal {R}}}\max _{x\in {\mathcal {X}}}\mathbb {E} [c(R,x)].}

Here, E {\displaystyle \mathbb {E} } is notation for the expected value, and x D {\displaystyle x\sim D} means that x {\displaystyle x} is a random variable distributed according to D {\displaystyle D} . Finiteness of A {\displaystyle {\mathcal {A}}} and X {\displaystyle {\mathcal {X}}} allows D {\displaystyle {\mathcal {D}}} and R {\displaystyle {\mathcal {R}}} to be interpreted as simplices of probability vectors, whose compactness implies that the minima and maxima in these formulas exist.

A weaker form of the same principle, as an inequality rather than an equality, converts any hard input distribution for deterministic algorithms into a lower bound on the cost of all randomized algorithms. If {\displaystyle D\in {\mathcal {D}}} is any specific choice of a hard input distribution, and {\displaystyle R} is any specific randomized algorithm in {\displaystyle {\mathcal {R}}}, then {\displaystyle \min _{A\in {\mathcal {A}}}\mathbb {E} _{x\sim D}[c(A,x)]\leq \max _{x\in {\mathcal {X}}}\mathbb {E} [c(R,x)].} That is, the best possible deterministic performance against distribution {\displaystyle D} is a lower bound for the performance of any randomized algorithm {\displaystyle R} against its worst-case input. One may also observe this weaker version of Yao's principle directly, as {\displaystyle \min _{A\in {\mathcal {A}}}\mathbb {E} _{x\sim D}[c(A,x)]\leq \mathbb {E} _{x\sim D}[c(R,x)]\leq \max _{x\in {\mathcal {X}}}\mathbb {E} [c(R,x)],} by linearity of expectation and the principle that {\displaystyle \min \leq \mathbb {E} \leq \max } for any distribution. By avoiding maximization and minimization over {\displaystyle {\mathcal {D}}} and {\displaystyle {\mathcal {R}}}, this version of Yao's principle can apply in some cases where {\displaystyle {\mathcal {X}}} or {\displaystyle {\mathcal {A}}} are not finite. However, the version of Yao's principle using an equality rather than an inequality can also be useful when proving lower bounds on randomized algorithms. The equality implies that there is no loss of generality in proving lower bounds in this way: whatever the actual best randomized algorithm might be, there is some input distribution through which a matching lower bound on its complexity can be proven.
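As a small numerical illustration of the weak form (a sketch only; the cost matrix, the distribution D, and the randomized algorithm R below are arbitrary hypothetical choices, not taken from the sources cited here), one can tabulate c ( A , x ) {\displaystyle c(A,x)} with deterministic algorithms as rows and inputs as columns and check the inequality directly:

    import numpy as np

    # Hypothetical cost measure c(A, x): rows are deterministic algorithms,
    # columns are inputs.  The numbers are arbitrary and purely illustrative.
    cost = np.array([[1.0, 3.0],
                     [2.0, 1.0]])

    # Left-hand side of the weak form: the best deterministic expected cost
    # against one chosen input distribution D.
    D = np.array([0.5, 0.5])
    deterministic_avg = (cost @ D).min()     # min_A E_{x~D}[c(A, x)]

    # Right-hand side: one randomized algorithm R (a distribution over rows),
    # evaluated on its worst-case input.
    R = np.array([1 / 3, 2 / 3])
    randomized_worst = (R @ cost).max()      # max_x E[c(R, x)]

    # Weak form of Yao's principle: the first quantity never exceeds the second.
    assert deterministic_avg <= randomized_worst
    print(deterministic_avg, "<=", randomized_worst)    # 1.5 <= 1.666...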

Applications and examples

Time complexity

When the cost c {\displaystyle c} denotes the running time of an algorithm, Yao's principle states that the best possible running time of a deterministic algorithm, on a hard input distribution, gives a lower bound for the expected time of any Las Vegas algorithm on its worst-case input. Here, a Las Vegas algorithm is a randomized algorithm whose runtime may vary, but for which the result is always correct. For example, this form of Yao's principle has been used to prove the optimality of certain Monte Carlo tree search algorithms for the exact evaluation of game trees.

Comparisons

The time complexity of comparison-based sorting and selection algorithms is often studied using the number of comparisons between pairs of data elements as a proxy for the total time. For these problems, for any fixed number of elements, an input can be expressed as a permutation and a deterministic algorithm can be expressed as a decision tree, both finite in number as Yao's principle requires. A symmetrization argument identifies the hardest input distributions: they are the random permutations, the distributions on n {\displaystyle n} distinct elements for which all permutations are equally likely. This is because, if some other distribution were hardest, then averaging it over all permutations of its elements would be equally hard and would produce the uniform distribution on permutations. Yao's principle extends lower bounds on the average-case number of comparisons made by deterministic algorithms, for random permutations, to the worst-case analysis of randomized comparison algorithms.
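As an illustration of the quantity being bounded (a sketch only; merge sort and the parameter choices are arbitrary examples, not part of the analyses cited here), the following code computes the expected number of comparisons that one particular deterministic algorithm makes on a uniformly random permutation. A lower bound argument would instead bound this average from below for every deterministic algorithm at once, for instance by a decision tree argument, and Yao's principle would then transfer that bound to the worst case of randomized algorithms.

    from itertools import permutations

    def merge_sort(a):
        # Return (sorted list, number of comparisons) for a plain top-down merge sort.
        if len(a) <= 1:
            return list(a), 0
        mid = len(a) // 2
        left, nl = merge_sort(a[:mid])
        right, nr = merge_sort(a[mid:])
        merged, i, j, count = [], 0, 0, nl + nr
        while i < len(left) and j < len(right):
            count += 1                        # one comparison per merge step
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged += left[i:] + right[j:]
        return merged, count

    # Average number of comparisons of this one algorithm over all permutations
    # of 5 elements, i.e. its cost against the random-permutation distribution.
    counts = [merge_sort(list(p))[1] for p in permutations(range(5))]
    print(sum(counts) / len(counts))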

An example given by Yao is the analysis of algorithms for finding the k {\displaystyle k} th largest of a given set of n {\displaystyle n} values, the selection problem. Subsequent to Yao's work, Walter Cunto and Ian Munro showed that, for random permutations, any deterministic algorithm must perform at least n + min ( k , n k ) O ( 1 ) {\displaystyle n+\min(k,n-k)-O(1)} expected comparisons. By Yao's principle, the same number of comparisons must be made by randomized algorithms on their worst-case input. The Floyd–Rivest algorithm comes within O ( n log n ) {\displaystyle O({\sqrt {n\log n}})} comparisons of this bound.

Evasiveness of graph properties

Another of the original applications by Yao of his principle was to the evasiveness of graph properties, the number of tests of the adjacency of pairs of vertices needed to determine whether a graph has a given property, when the only access to the graph is through such tests. Richard M. Karp conjectured that every randomized algorithm for every nontrivial monotone graph property (a property that remains true for every subgraph of a graph with the property) requires a quadratic number of tests, but only weaker bounds have been proven.

As Yao stated, for graph properties that are true of the empty graph but false for some other graph on n {\displaystyle n} vertices with only a bounded number s {\displaystyle s} of edges, a randomized algorithm must probe a quadratic number of pairs of vertices. For instance, for the property of being a planar graph, s = 9 {\displaystyle s=9} because the 9-edge utility graph is non-planar. More precisely, Yao states that for these properties, at least ( 1 2 p ε ) 1 s ( n 2 ) {\displaystyle \left({\tfrac {1}{2}}-p-\varepsilon \right){\tfrac {1}{s}}{\tbinom {n}{2}}} tests are needed, for every ε > 0 {\displaystyle \varepsilon >0} , for a randomized algorithm to have probability at most p {\displaystyle p} of making a mistake. Yao also used this method to show that quadratically many queries are needed for the properties of containing a given tree or clique as a subgraph, of containing a perfect matching, and of containing a Hamiltonian cycle, for small enough constant error probabilities.

Black-box optimization

In black-box optimization, the problem is to determine the minimum or maximum value of a function, from a given class of functions, accessible only through calls to the function on arguments from some finite domain. In this case, the cost to be optimized is the number of calls. Yao's principle has been described as "the only method available for proving lower bounds for all randomized search heuristics for selected classes of problems". Results that can be proven in this way include the following:

  • For Boolean functions on n {\displaystyle n} -bit binary strings that test whether the input equals some fixed but unknown string, the optimal expected number of function calls needed to find the unknown string is 2 n 1 + 1 2 {\displaystyle 2^{n-1}+{\tfrac {1}{2}}} . This can be achieved by an algorithm that tests strings in a random order, and proved optimal by using Yao's principle on an input distribution that chooses a uniformly random function from this class; a simulation sketch of this bound appears after this list.
  • A unimodal function f {\displaystyle f} from n {\displaystyle n} -bit binary strings to real numbers is defined by the following property: For each input string x {\displaystyle x} , either f ( x ) {\displaystyle f(x)} is the unique maximum value of f {\displaystyle f} , or x {\displaystyle x} can be changed in a single bit to a string y {\displaystyle y} with a larger value. Thus, a local search that changes one bit at a time when this produces a larger value will always eventually find the maximum value. Such a search may take exponentially many steps, but nothing significantly better is possible. For any randomized algorithm that performs 2 o ( n ) {\displaystyle 2^{o(n)}} queries, some function in this class will cause the algorithm to have an exponentially small probability of finding the maximum.
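The first of these bounds is easy to check empirically. The following simulation sketch (the parameters and the integer encoding of strings are illustrative choices, not from the sources cited here) probes candidate strings in a uniformly random order and estimates the expected number of calls needed to hit a hidden string; the estimate approaches 2^(n-1) + 1/2.

    import random

    def expected_probes(n, trials=100_000, rng=random.Random(0)):
        # Estimate the expected number of function calls needed to find a hidden
        # n-bit string when candidates are probed in a uniformly random order.
        size = 2 ** n
        total = 0
        for _ in range(trials):
            hidden = rng.randrange(size)          # the unknown string, as an integer
            order = list(range(size))
            rng.shuffle(order)                    # probe candidates in random order
            total += order.index(hidden) + 1      # calls made until the hit
        return total / trials

    print(expected_probes(4), 2 ** 3 + 0.5)       # estimate vs. the exact value 8.5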

Communication complexity

In communication complexity, an algorithm describes a communication protocol between two or more parties, and its cost may be the number of bits or messages transmitted between the parties. In this case, Yao's principle describes an equality between the average-case complexity of deterministic communication protocols, on an input distribution that is the worst case for the problem, and the expected communication complexity of randomized protocols on their worst-case inputs.

An example described by Avi Wigderson (based on a paper by Emanuele Viola) is the communication complexity for two parties, each holding n {\displaystyle n} -bit input values, to determine which value is larger. For deterministic communication protocols, nothing better than n {\displaystyle n} bits of communication is possible, easily achieved by one party sending their whole input to the other. However, parties with a shared source of randomness and a fixed error probability can exchange 1-bit hash functions of prefixes of the input to perform a noisy binary search for the first position where their inputs differ, achieving O ( log n ) {\displaystyle O(\log n)} bits of communication. This is within a constant factor of optimal, as can be shown via Yao's principle with an input distribution that chooses the position of the first difference uniformly at random, and then chooses random strings for the shared prefix up to that position and the rest of the inputs after that position.
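A simplified sketch of the shared-randomness idea follows. It is illustrative only and not the exact protocol from the sources cited here: each prefix-equality test is repeated k times, giving per-step error about 2^(-k) and O(k log n) bits of communication in total, whereas the O(log n) bound quoted above requires a more careful noisy binary search that tolerates occasional wrong answers.

    import random

    def parity_hash(bits, length, mask):
        # 1-bit hash of bits[:length]: the parity of the positions selected by mask.
        return sum(b & m for b, m in zip(bits[:length], mask)) % 2

    def compare(alice, bob, k=10, rng=random.Random(0)):
        # Decide whose n-bit value (most significant bit first) is larger,
        # exchanging roughly k hash bits per round of a binary search.
        n = len(alice)
        if alice == bob:                  # shortcut for this sketch; a real protocol
            return "equal"                # would have to detect equality by hashing too
        lo, hi = 0, n                     # invariant (w.h.p.): length-lo prefixes agree,
        while hi - lo > 1:                # length-hi prefixes differ
            mid = (lo + hi) // 2
            agree = True
            for _ in range(k):            # k independent 1-bit hashes of the prefixes
                mask = [rng.randrange(2) for _ in range(mid)]    # shared randomness
                if parity_hash(alice, mid, mask) != parity_hash(bob, mid, mask):
                    agree = False         # hashes can differ only if the prefixes differ
                    break
            if agree:
                lo = mid
            else:
                hi = mid
        i = hi - 1                        # first position where the inputs (probably) differ
        return "alice" if alice[i] > bob[i] else "bob"

    print(compare([1, 0, 1, 1], [1, 0, 0, 1]))    # -> 'alice', since 1011 > 1001 in binary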

Online algorithms

Yao's principle has also been applied to the competitive ratio of online algorithms. An online algorithm must respond to a sequence of requests, without knowledge of future requests, incurring some cost or profit per request depending on its choices. The competitive ratio is the ratio of its cost or profit to the value that could be achieved by an offline algorithm with access to knowledge of all future requests, for a worst-case request sequence that causes this ratio to be as far from one as possible. Here, one must be careful to formulate the ratio with the algorithm's performance in the numerator and the optimal performance of an offline algorithm in the denominator, so that the cost measure can be formulated as an expected value rather than as the reciprocal of an expected value.

An example given by Borodin & El-Yaniv (2005) concerns page replacement algorithms, which respond to requests for pages of computer memory by using a cache of k {\displaystyle k} pages, for a given parameter k {\displaystyle k} . If a request matches a cached page, it costs nothing; otherwise one of the cached pages must be replaced by the requested page, at a cost of one page fault. A difficult distribution of request sequences for this model can be generated by choosing each request uniformly at random from a pool of k + 1 {\displaystyle k+1} pages. Any deterministic online algorithm has n k + 1 {\displaystyle {\tfrac {n}{k+1}}} expected page faults, over n {\displaystyle n} requests. Instead, an offline algorithm can divide the request sequence into phases within which only k {\displaystyle k} pages are used, incurring only one fault at the start of a phase to replace the one page that is unused within the phase. As an instance of the coupon collector's problem, the expected number of requests per phase is ( k + 1 ) H k {\displaystyle (k+1)H_{k}} , where H k = 1 + 1 2 + + 1 k {\displaystyle H_{k}=1+{\tfrac {1}{2}}+\cdots +{\tfrac {1}{k}}} is the k {\displaystyle k} th harmonic number. By renewal theory, the offline algorithm incurs n ( k + 1 ) H k + o ( n ) {\displaystyle {\tfrac {n}{(k+1)H_{k}}}+o(n)} page faults with high probability, so the competitive ratio of any deterministic algorithm against this input distribution is at least H k {\displaystyle H_{k}} . By Yao's principle, H k {\displaystyle H_{k}} also lower bounds the competitive ratio of any randomized page replacement algorithm against a request sequence chosen by an oblivious adversary to be a worst case for the algorithm but without knowledge of the algorithm's random choices.
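The following simulation sketch reproduces this ratio numerically (the choices of LRU as the deterministic online algorithm, k = 5, and 200000 uniformly random requests are illustrative assumptions, not from the source): the online algorithm faults about once per k + 1 requests, the number of phases approximates the offline fault count, and their ratio approaches H_k.

    import random
    from collections import OrderedDict

    def paging_ratio(k=5, n=200_000, seed=1):
        rng = random.Random(seed)
        requests = [rng.randrange(k + 1) for _ in range(n)]

        # Online: an LRU cache of k pages.  Against uniformly random requests over
        # k + 1 pages, any deterministic policy faults about once per k + 1 requests.
        cache, online_faults = OrderedDict(), 0
        for p in requests:
            if p in cache:
                cache.move_to_end(p)
            else:
                online_faults += 1
                if len(cache) == k:
                    cache.popitem(last=False)     # evict the least recently used page
                cache[p] = True

        # Offline: count phases in which only k distinct pages are requested;
        # one fault per phase suffices, as in the phase argument above.
        phases, seen = 1, set()
        for p in requests:
            seen.add(p)
            if len(seen) == k + 1:                # a (k+1)-st distinct page starts a new phase
                phases += 1
                seen = {p}

        H_k = sum(1 / i for i in range(1, k + 1))
        print(online_faults / phases, "vs H_k =", H_k)   # both are about 2.28 for k = 5

    paging_ratio()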

For online problems in a general class related to the ski rental problem, Seiden has proposed a cookbook method for deriving optimally hard input distributions, based on certain parameters of the problem.

Relation to game theory and linear programming

Yao's principle may be interpreted in game theoretic terms, via a two-player zero-sum game in which one player, Alice, selects a deterministic algorithm, the other player, Bob, selects an input, and the payoff is the cost of the selected algorithm on the selected input. Any randomized algorithm R {\displaystyle R} may be interpreted as a randomized choice among deterministic algorithms, and thus as a mixed strategy for Alice. Similarly, a non-random algorithm may be thought of as a pure strategy for Alice. In any two-player zero-sum game, if one player chooses a mixed strategy, then the other player has an optimal pure strategy against it. By the minimax theorem of John von Neumann, there exists a game value c {\displaystyle c} , and mixed strategies for each player, such that the players can guarantee expected value c {\displaystyle c} or better by playing those strategies, and such that the optimal pure strategy against either mixed strategy produces expected value exactly c {\displaystyle c} . Thus, the minimax mixed strategy for Alice, set against the best opposing pure strategy for Bob, produces the same expected game value c {\displaystyle c} as the minimax mixed strategy for Bob, set against the best opposing pure strategy for Alice. This equality of expected game values, for the game described above, is Yao's principle in its form as an equality. Yao's 1977 paper, originally formulating Yao's principle, proved it in this way.

The optimal mixed strategy for Alice (a randomized algorithm) and the optimal mixed strategy for Bob (a hard input distribution) may each be computed using a linear program that has one player's probabilities as its variables, with a constraint on the game value for each choice of the other player. The two linear programs obtained in this way for each player are dual linear programs, whose equality is an instance of linear programming duality. However, although linear programs may be solved in polynomial time, the numbers of variables and constraints in these linear programs (numbers of possible algorithms and inputs) are typically too large to list explicitly. Therefore, formulating and solving these programs to find these optimal strategies is often impractical.
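For a toy cost matrix small enough to list explicitly, both linear programs can be written down and solved directly. The following sketch assumes SciPy's linprog; the matrix and all variable names are illustrative, not part of the sources cited here.

    import numpy as np
    from scipy.optimize import linprog

    # Toy cost matrix: rows are deterministic algorithms, columns are inputs.
    cost = np.array([[1.0, 3.0],
                     [2.0, 1.0]])
    n_alg, n_inp = cost.shape

    # Alice's LP (optimal randomized algorithm): probabilities p over the rows,
    # minimizing v subject to (p @ cost)[x] <= v for every input x.
    alice = linprog(
        c=np.r_[np.zeros(n_alg), 1.0],                   # objective: minimize v
        A_ub=np.c_[cost.T, -np.ones(n_inp)],             # cost^T p - v <= 0
        b_ub=np.zeros(n_inp),
        A_eq=[np.r_[np.ones(n_alg), 0.0]], b_eq=[1.0],   # probabilities sum to 1
        bounds=[(0, None)] * n_alg + [(None, None)],
    )

    # Bob's LP (optimal hard input distribution): probabilities q over the columns,
    # maximizing w subject to (cost @ q)[a] >= w for every algorithm a.
    bob = linprog(
        c=np.r_[np.zeros(n_inp), -1.0],                  # objective: maximize w
        A_ub=np.c_[-cost, np.ones(n_alg)],               # w - cost q <= 0
        b_ub=np.zeros(n_alg),
        A_eq=[np.r_[np.ones(n_inp), 0.0]], b_eq=[1.0],   # probabilities sum to 1
        bounds=[(0, None)] * n_inp + [(None, None)],
    )

    # Linear programming duality (Yao's principle in its equality form): the two
    # optimal values coincide, here 5/3.
    print(alice.x[-1], -bob.fun)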

Extensions

For Monte Carlo algorithms, algorithms that use a fixed amount of computational resources but that may produce an erroneous result, a form of Yao's principle applies to the probability of an error, the error rate of an algorithm. Choosing the hardest possible input distribution, and the algorithm that achieves the lowest error rate against that distribution, gives the same error rate as choosing an optimal algorithm and its worst case input distribution. However, the hard input distributions found in this way are not robust to changes in the parameters used when applying this principle. If an input distribution requires high complexity to achieve a certain error rate, it may nevertheless have unexpectedly low complexity for a different error rate. Ben-David and Blais show that, for Boolean functions under many natural measures of computational complexity, there exists an input distribution that is simultaneously hard for all error rates.

Variants of Yao's principle have also been considered for quantum computing. In place of randomized algorithms, one may consider quantum algorithms that have a good probability of computing the correct value for every input (probability at least 2 3 {\displaystyle {\tfrac {2}{3}}} ); this condition together with polynomial time defines the complexity class BQP. It does not make sense to ask for deterministic quantum algorithms, but instead one may consider algorithms that, for a given input distribution, have probability 1 of computing a correct answer, either in a weak sense that the inputs for which this is true have probability 2 3 {\displaystyle \geq {\tfrac {2}{3}}} , or in a strong sense in which, in addition, the algorithm must have probability 0 or 1 of generating any particular answer on the remaining inputs. For any Boolean function, the minimum complexity of a quantum algorithm that is correct with probability 2 3 {\displaystyle \geq {\tfrac {2}{3}}} against its worst-case input is less than or equal to the minimum complexity that can be attained, for a hard input distribution, by the best weak or strong quantum algorithm against that distribution. The weak form of this inequality is within a constant factor of being an equality, but the strong form is not.

References

  1. ^ Arora, Sanjeev; Barak, Boaz (2009), "Note 12.8: Yao's Min-Max Lemma", Computational Complexity: A Modern Approach, Cambridge University Press, p. 265, ISBN 9780511530753
  2. ^ Yao, Andrew (1977), "Probabilistic computations: Toward a unified measure of complexity", Proceedings of the 18th IEEE Symposium on Foundations of Computer Science (FOCS), pp. 222–227, doi:10.1109/SFCS.1977.24
  3. ^ Laraki, Rida; Renault, Jérôme; Sorin, Sylvain (2019), "2.3 The Minmax Theorem", Mathematical Foundations of Game Theory, Universitext, Springer, pp. 16–18, doi:10.1007/978-3-030-26646-2, ISBN 978-3-030-26646-2
  4. Bohnenblust, H. F.; Karlin, S.; Shapley, L. S. (1950), "Solutions of discrete, two-person games", in Kuhn, Harold W.; Tucker, Albert William (eds.), Contributions to the Theory of Games, Annals of Mathematics Studies, vol. 24, Princeton University Press, pp. 51–72, doi:10.1515/9781400881727-006, ISBN 978-1-4008-8172-7, MR 0039218
  5. ^ Borodin, Allan; El-Yaniv, Ran (2005), "8.3 Yao's principle: A technique for obtaining lower bounds", Online Computation and Competitive Analysis, Cambridge University Press, pp. 115–120, ISBN 9780521619462
  6. ^ Wigderson, Avi (2019), Mathematics and Computation: A Theory Revolutionizing Technology and Science, Princeton University Press, p. 210, ISBN 9780691189130
  7. Moore, Cristopher; Mertens, Stephan (2011), "Theorem 10.1 (Yao's principle)", The Nature of Computation, Oxford University Press, p. 471, ISBN 9780199233212
  8. ^ Motwani, Rajeev; Raghavan, Prabhakar (2010), "Chapter 12: Randomized Algorithms", in Atallah, Mikhail J.; Blanton, Marina (eds.), Algorithms and Theory of Computation Handbook: General Concepts and Techniques (2nd ed.), CRC Press, pp. 12-1 – 12-24; see in particular Section 12.5: The minimax principle and lower bounds, pp. 12-8 – 12-10
  9. Cunto, Walter; Munro, J. Ian (1989), "Average case selection", Journal of the ACM, 36 (2): 270–279, doi:10.1145/62044.62047, MR 1072421, S2CID 10947879
  10. Chan, Timothy M. (2010), "Comparison-based time-space lower bounds for selection", ACM Transactions on Algorithms, 6 (2): A26:1–A26:16, doi:10.1145/1721837.1721842, MR 2675693, S2CID 11742607
  11. Knuth, Donald E. (1998), "Section 5.3.3: Minimum-comparison selection", The Art of Computer Programming, Volume 3: Sorting and Searching (2nd ed.), Addison-Wesley, pp. 207–219, ISBN 0-201-89685-0
  12. Chakrabarti, Amit; Khot, Subhash (2007), "Improved lower bounds on the randomized complexity of graph properties", Random Structures & Algorithms, 30 (3): 427–440, doi:10.1002/rsa.20164, MR 2309625, S2CID 8384071
  13. ^ Wegener, Ingo (2005), "9.2 Yao's minimax principle", Complexity Theory: Exploring the Limits of Efficient Algorithms, Springer-Verlag, pp. 118–120, doi:10.1007/3-540-27477-4, ISBN 978-3-540-21045-0, MR 2146155
  14. ^ Fortnow, Lance (October 16, 2006), "Favorite theorems: Yao principle", Computational Complexity
  15. Viola, Emanuele (2015), "The communication complexity of addition", Combinatorica, 35 (6): 703–747, doi:10.1007/s00493-014-3078-3, MR 3439794
  16. Borodin & El-Yaniv (2005), pp. 120–122, 8.4 Paging revisited.
  17. Seiden, Steven S. (2000), "A guessing game and randomized online algorithms", in Yao, F. Frances; Luks, Eugene M. (eds.), Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing, May 21–23, 2000, Portland, OR, USA, pp. 592–601, doi:10.1145/335305.335385, ISBN 1-58113-184-4
  18. Ben-David, Shalev; Blais, Eric (2023), "A new minimax theorem for randomized algorithms", Journal of the ACM, 70 (6) 38, arXiv:2002.10802, doi:10.1145/3626514, MR 4679504
  19. de Graaf, Mart; de Wolf, Ronald (2002), "On quantum versions of the Yao principle", in Alt, Helmut; Ferreira, Afonso (eds.), STACS 2002, 19th Annual Symposium on Theoretical Aspects of Computer Science, Antibes – Juan les Pins, France, March 14–16, 2002, Proceedings, Lecture Notes in Computer Science, vol. 2285, Springer, pp. 347–358, arXiv:quant-ph/0109070, doi:10.1007/3-540-45841-7_28, ISBN 978-3-540-43283-8