User:Erel Segal/Pure existence theorem of algorithm
The vast majority of positive results about computational problems are constructive proofs: a computational problem is proved to be solvable by exhibiting an algorithm that solves it; a problem is shown to be in the complexity class P by exhibiting an algorithm that solves it in time polynomial in the size of the input; and so on.
However, there are also rare non-constructive results, in which an algorithm is proved to exist without exhibiting the algorithm itself. Several techniques are used to provide such existence proofs.
Law of excluded middle
Sometimes there are two possible algorithms for solving a problem. We don't know which of them is correct, but by the law of excluded middle, we know that one of them must be correct, so the problem is solvable.
For example, consider the following constant function f:

    f(n) = 1 for every n, if Goldbach's conjecture is true
    f(n) = 0 for every n, if Goldbach's conjecture is false
Goldbach's conjecture is either true or false. If it is true, then f is the constant 1 and the required algorithm is just "print 1". If it is false, then f is the constant 0 and the required algorithm is just "print 0". In either case there is a simple, one-line algorithm that computes f, so by definition f is computable. It is true that we don't know which algorithm to use, but we do know that an algorithm exists.
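The situation can be made concrete with a short sketch (in Python, for illustration; the two function names are mine, not part of the argument). Exactly one of the two candidate programs below computes f; the law of excluded middle tells us that one of them does, even though we cannot say which:

    # Two candidate one-line algorithms for the constant function f.
    # By the law of excluded middle, exactly one of them is correct --
    # we just do not know which one.

    def candidate_if_goldbach_true(n):
        # Correct if Goldbach's conjecture is true.
        return 1

    def candidate_if_goldbach_false(n):
        # Correct if Goldbach's conjecture is false.
        return 0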
A slightly more complicated example is the following function:

    f(n) = 1 if a string of n consecutive zeros appears in the decimal representation of π
    f(n) = 0 otherwise

The function f is computable because there are only two possibilities to consider:
- For every positive integer n, the string 0^n (n consecutive zeros) appears in the decimal representation of π. In this case, the algorithm that always returns 1 is always correct.
- There is a largest integer N such that 0^N appears in the decimal representation of π. In this case the following algorithm (with the value of N hard-coded) is always correct:
    Zeros-in-pi(n):
        if (n > N) then return 0 else return 1
We have no idea which of these possibilities is correct, or what value of N is the right one in the second case. Nevertheless, one of these algorithms is guaranteed to be correct. Thus, there is an algorithm to decide whether a string of n zeros appears in the decimal representation of π; the problem is decidable.
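As with the Goldbach example, the family of candidate algorithms can be written down explicitly (a Python sketch, for illustration; the names are mine). One member of this family — either always_one, or make_candidate(N) for the right, unknown N — is a correct program for f:

    # The candidate algorithms for f.  Exactly one member of this family
    # is correct: either always_one, or make_candidate(N) for the unknown
    # largest run length N.  Hence f is computable, even though we cannot
    # point at the correct member.

    def always_one(n):
        # Correct if every run of n consecutive zeros appears in pi.
        return 1

    def make_candidate(N):
        # Build the candidate that is correct if N is the largest run length.
        def zeros_in_pi(n):
            return 0 if n > N else 1
        return zeros_in_pi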
Using an unknown finite set
Some problems can be solved using an algorithm that uses, as a parameter, an unknown set. Although the set is unknown, we know that it must be finite, and thus a polynomial-time algorithm exists.
Some useful examples come from graph theory.[1] A common question in graph theory is whether a certain input graph has a certain property. For example:
- Input: a graph G.
- Question: Can G be embedded in a 3-dimensional space, such that no two disjoint cycles of G are topologically linked (as in links of a chain)?
There is a highly exponential algorithm that decides whether two cycles embedded in 3-dimensional space are linked, and one could test all pairs of cycles in the graph, but it is not obvious how to account for all possible embeddings in 3-dimensional space. Thus, it is a priori not clear at all whether the linkedness problem is decidable.
However, there is a non-constructive proof that shows that linkedness is decidable in polynomial time. The proof relies on the following facts:
- The set of graphs for which the answer is "yes" is closed under taking minors. I.e., if a graph G can be embedded linklessly in 3-d space, then every minor of G can also be embedded linklessly.
- For every fixed graph H, it is possible to check in polynomial time (in the size of G) whether H is a minor of a given graph G.
- By the Robertson–Seymour theorem, any set of finite graphs contains only a finite number of minor-minimal elements. In particular, the set of "no" instances (the complement of the "yes" set) has a finite number of minor-minimal elements; these are the forbidden minors of the "yes" set.
Given an input graph G, the following "algorithm" solves the above problem:
    For every minor-minimal "no" instance H:
        if H is a minor of G then return "no".
    return "yes".
The non-constructive part here is the Robertson–Seymour theorem. Although it guarantees that there is a finite number of minor-minimal elements, it does not tell us what these elements are. Therefore, we cannot really execute the "algorithm" mentioned above. But we do know that an algorithm exists and that its runtime is polynomial.
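To make the scheme concrete, here is a minimal Python sketch under explicit assumptions: OBSTRUCTIONS stands for the unknown finite set of minor-minimal "no" instances, and is_minor stands for a polynomial-time minor test. Both are guaranteed to exist by the theory, but neither is handed to us; the names are placeholders of mine, not part of the proof:

    # Hypothetical sketch of the obstruction-set scheme.  OBSTRUCTIONS and
    # is_minor are placeholders: the theory guarantees that a finite
    # obstruction set and a polynomial-time minor test exist, but the
    # existence proof does not tell us the contents of OBSTRUCTIONS.

    OBSTRUCTIONS = [...]  # unknown finite list of minor-minimal "no" instances

    def is_minor(h, g):
        # Placeholder for a polynomial-time minor-containment test.
        raise NotImplementedError

    def linklessly_embeddable(g):
        # Return True iff g is a "yes" instance (has a linkless embedding).
        for h in OBSTRUCTIONS:
            if is_minor(h, g):
                return False  # g contains a forbidden minor: answer "no"
        return True           # no obstruction is a minor of g: answer "yes"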
There are many more similar problems whose decidability can be proved in a similar way. In some cases, the knowledge that a problem can be solved in polynomial time has led researchers to search for, and find, an actual polynomial-time algorithm that solves the problem in an entirely different way. This shows that non-constructive proofs can have constructive outcomes.[1]
Counting the algorithms
Sometimes the number of potential algorithms for a given problem is finite. We can count the number of possible algorithms and prove that only a bounded number of them are "bad", so at least one algorithm must be "good".
As an example, consider the following problem.[2]
I select a vector v composed of n elements which are integers between 0 and a certain constant d.
You have to guess v by asking sum queries: queries of the form "what is the sum of the elements at a given set of indices?". A sum query may involve any subset of the indices 1, ..., n.
How many queries do you need? Obviously, n queries are always sufficient, because you can use n queries asking for the "sum" of a single element. But when d is sufficiently small, it is possible to do better. The general idea is as follows.
Every query can be represented as a 1-by-n vector whose elements are all in the set {0,1}. The response to the query is just the dot product of the query vector with v. Every set of k queries can be represented by a k-by-n matrix M over {0,1}; the set of responses is then the matrix-vector product M v.
A matrix M is "good" if it enables us to uniquely identify v. This means that distinct vectors v yield distinct products M v. A matrix M is "bad" if there are two different vectors, v and u, such that M v = M u.
Using some algebra, it is possible to bound the number of "bad" matrices; the bound is a function of d and k. Whenever this bound is smaller than the total number of k-by-n matrices over {0,1}, at least one "good" matrix with that k must exist. Thus, for a sufficiently small d, there must be a "good" matrix with a small k, which corresponds to an efficient algorithm for solving the identification problem.
This proof is non-constructive in two ways: it is not known how to find a good matrix; and even if a good matrix is supplied, it is not known how to efficiently reconstruct the vector from the query replies.
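The objects in this argument are easy to experiment with for tiny parameters. The following Python sketch (the names and the brute-force structure are mine, not taken from the cited paper) checks whether a given {0,1} query matrix is "good", and exhaustively searches for one; it is feasible only for very small n, k and d:

    from itertools import product

    def is_good(M, n, d):
        # M is "good" if distinct vectors in {0,...,d}^n give distinct
        # responses, i.e. the map v -> M v is injective.
        # Brute force over all vectors: tiny instances only.
        seen = set()
        for v in product(range(d + 1), repeat=n):
            responses = tuple(sum(row[j] * v[j] for j in range(n)) for row in M)
            if responses in seen:
                return False      # two different vectors give the same answers
            seen.add(responses)
        return True

    def find_good_matrix(n, k, d):
        # Exhaustively search for a good k-by-n query matrix over {0,1}.
        for flat in product((0, 1), repeat=k * n):
            M = [list(flat[i * n:(i + 1) * n]) for i in range(k)]
            if is_good(M, n, d):
                return M
        return None

    # Example: is_good([[1,1,1,0], [1,1,0,1], [1,0,1,1]], n=4, d=1) returns
    # True, so for n = 4 and d = 1 three queries already suffice -- fewer
    # than the n queries of the trivial one-element-per-query strategy.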
There are many more similar problems which can be proved to be solvable in a similar way.[2]
References
- [1] Fellows, M. R.; Langston, M. A. (1988). "Nonconstructive tools for proving polynomial-time decidability". Journal of the ACM. 35 (3): 727. doi:10.1145/44483.44491.
- [2] Grebinski, V.; Kucherov, G. (2000). "Optimal Reconstruction of Graphs under the Additive Model". Algorithmica. 28: 104. doi:10.1007/s004530010033.
- Brown, D. J.; Fellows, M. R.; Langston, M. A. (2007). "Polynomial-time self-reducibility: Theoretical motivations and practical results". International Journal of Computer Mathematics. 31: 1. doi:10.1080/00207168908803783.
- Kimmel, S. (2013). "Quantum Adversary (Upper) Bound". Chicago Journal of Theoretical Computer Science. 19: 1–14. doi:10.4086/cjtcs.2013.004.
Credits
The references in this page were collected from the following Stack Exchange threads:
- "How can it be decidable whether π has some sequence of digits?". Computer Science Stack Exchange. Retrieved 21 November 2014.
- "Are there problems without efficient algorithms, where existence theorems have proved such algorithms must exist?". CS Theory Stack Exchange. Retrieved 21 November 2014.
- "Are there non-constructive algorithm existence proofs?". CS Theory Stack Exchange. Retrieved 21 November 2014.
- "Is there an algorithm that provably exists although we don't know what it is?". Computer Science Stack Exchange. Retrieved 21 November 2014.