Fine-grained reduction

In computational complexity theory, a fine-grained reduction is a transformation from one computational problem to another, used to relate the difficulty of improving the time bounds for the two problems. Intuitively, it provides a method for solving one problem efficiently by using the solution to the other problem as a subroutine. If problem $A$ can be solved in time $a(n)$ and problem $B$ can be solved in time $b(n)$, then the existence of an $(a,b)$-reduction from problem $A$ to problem $B$ implies that any significant speedup for problem $B$ would also lead to a speedup for problem $A$.

Definition

Let $A$ and $B$ be computational problems, specified as the desired output for each possible input. Let $a$ and $b$ both be time-constructible functions that take an integer argument $n$ and produce an integer result. Usually, $a$ and $b$ are the time bounds for known or naive algorithms for the two problems, and often they are monomials such as $n^2$.

Then $A$ is said to be $(a,b)$-reducible to $B$ if, for every real number $\epsilon > 0$, there exist a real number $\delta > 0$ and an algorithm that solves instances of problem $A$ by transforming them into a sequence of instances of problem $B$, taking time $O\bigl(a(n)^{1-\delta}\bigr)$ for the transformation on instances of size $n$, and producing a sequence of instances whose sizes $n_i$ satisfy $\sum_i b(n_i)^{1-\epsilon} < a(n)^{1-\delta}$.
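
The quantifier structure of this definition can be restated as a single displayed formula. The following is only a rewording of the condition above, using $T_R(n)$ (a symbol introduced here, not in the article) for the running time of the transformation algorithm $R$ on instances of size $n$:

    % A is (a,b)-reducible to B when, for every eps > 0, some delta > 0 and
    % some transformation algorithm R satisfy both conditions below, where
    % n_1, n_2, ... are the sizes of the B-instances produced by R.
    \[
      \forall \epsilon > 0 \;\; \exists \delta > 0 :\qquad
      T_R(n) = O\!\bigl(a(n)^{1-\delta}\bigr)
      \quad\text{and}\quad
      \sum_i b(n_i)^{1-\epsilon} < a(n)^{1-\delta} .
    \]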

An $(a,b)$-reduction is given by the mapping from each $\epsilon$ to the corresponding pair of an algorithm and $\delta$.

Speedup implication

Suppose $A$ is $(a,b)$-reducible to $B$, and there exists $\epsilon > 0$ such that $B$ can be solved in time $O\bigl(b(n)^{1-\epsilon}\bigr)$. Then there also exists $\delta > 0$ such that $A$ can be solved in time $O\bigl(a(n)^{1-\delta}\bigr)$: namely, let $\delta$ be the value given by the $(a,b)$-reduction for this $\epsilon$, and solve $A$ by applying the transformation of the reduction and using the fast algorithm for $B$ on each resulting subproblem.
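
In the simplest case the reduction produces all of its $B$-instances up front. As a minimal sketch of the argument in that case (the interfaces here are hypothetical and not part of the article), the fast algorithm for $A$ can be written as generic Python code that composes the reduction with a fast solver for $B$:

    # Hypothetical interfaces:
    #   reduce_A_to_B(instance) -- the (a,b)-reduction: runs in time
    #       O(a(n)^(1-delta)) and returns the produced B-instances together
    #       with a function that combines their answers into an answer for A.
    #   fast_solve_B(instance)  -- the assumed fast algorithm for B, running
    #       in time O(b(m)^(1-eps)) on instances of size m.
    def solve_A(instance, reduce_A_to_B, fast_solve_B):
        """Solve an A-instance via the reduction and the fast B-solver.

        Total time: O(a(n)^(1-delta)) for the transformation plus
        sum_i O(b(n_i)^(1-eps)) for the subproblems, which the definition
        of an (a,b)-reduction bounds by O(a(n)^(1-delta)).
        """
        b_instances, combine = reduce_A_to_B(instance)
        b_answers = [fast_solve_B(sub) for sub in b_instances]
        return combine(b_answers)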

Equivalently, if $A$ cannot be solved in time significantly faster than $a(n)$, then $B$ cannot be solved in time significantly faster than $b(n)$.

History

Fine-grained reductions were defined, in the special case that $a$ and $b$ are equal monomials, by Virginia Vassilevska Williams and Ryan Williams in 2010. They also showed the existence of $(n^3, n^3)$-reductions between several problems, including all-pairs shortest paths, finding the second-shortest path between two given vertices in a weighted graph, finding negative-weight triangles in weighted graphs, and testing whether a given distance matrix describes a metric space. According to their results, either all of these problems have time bounds with exponents less than three, or none of them do.

The term "fine-grained reduction" comes from later work by Virginia Vassilevska Williams in an invited presentation at the 10th International Symposium on Parameterized and Exact Computation.

Although the original definition of fine-grained reductions involved deterministic algorithms, the corresponding concepts for randomized algorithms and nondeterministic algorithms have also been considered.

References

  1. Williams, Virginia V. (2015), "Hardness of easy problems: basing hardness on popular conjectures such as the Strong Exponential Time Hypothesis", 10th International Symposium on Parameterized and Exact Computation, LIPIcs. Leibniz Int. Proc. Inform., vol. 43, Schloss Dagstuhl. Leibniz-Zent. Inform., Wadern, pp. 17–29, MR 3452406.
  2. Williams, Virginia Vassilevska; Williams, R. Ryan (2018), "Subcubic equivalences between path, matrix, and triangle problems", Journal of the ACM, 65 (5): A27:1–A27:38, doi:10.1145/3186893, hdl:1721.1/134750, MR 3856539. A preliminary version of these results, including the definition of a "subcubic reduction", a special case of a fine-grained reduction, was presented at the 2010 Symposium on Foundations of Computer Science.
  3. Carmosino, Marco L.; Gao, Jiawei; Impagliazzo, Russell; Mihajlin, Ivan; Paturi, Ramamohan; Schneider, Stefan (2016), "Nondeterministic extensions of the strong exponential time hypothesis and consequences for non-reducibility", ITCS'16—Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science, ACM, New York, pp. 261–270, MR 3629829.