
Learning augmented algorithm

Article snapshot taken from Wikipedia, licensed under the Creative Commons Attribution-ShareAlike license.

A learning augmented algorithm is an algorithm that can make use of a prediction to improve its performance. Whereas regular algorithms take only the problem instance as input, learning augmented algorithms accept an extra parameter. This extra parameter is often a prediction of some property of the solution. The algorithm then uses this prediction to improve its running time or the quality of its output.

Description

A learning augmented algorithm typically takes an input (I, A). Here I is a problem instance and A is the advice: a prediction about a certain property of the optimal solution. The type of the problem instance and the prediction depend on the algorithm. Learning augmented algorithms usually satisfy the following two properties:

  • Consistency. A learning augmented algorithm is said to be consistent if the algorithm can be proven to have a good performance when it is provided with an accurate prediction. Usually, this is quantified by giving a bound on the performance that depends on the error in the prediction.
  • Robustness. An algorithm is called robust if its worst-case performance can be bounded even if the given prediction is inaccurate.
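One common way to state both properties at once (a schematic formulation from the algorithms-with-predictions literature, not a definition given in this article) is to bound the algorithm's cost against the optimum by the better of an error-dependent ratio and a fixed worst-case ratio:

```latex
\mathrm{ALG}(I, A) \;\le\; \min\bigl\{\, c(\eta) \cdot \mathrm{OPT}(I),\;\; r \cdot \mathrm{OPT}(I) \,\bigr\}
```

Here η denotes the prediction error. A ratio c(η) that approaches a small constant as η → 0 expresses consistency, while the fixed ratio r, independent of the prediction, expresses robustness.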

Learning augmented algorithms generally do not prescribe how the prediction should be done. For this purpose machine learning can be used.

Examples

Binary search

The binary search algorithm is an algorithm for finding elements of a sorted list x_1, ..., x_n. It needs O(log n) steps to find an element with some known value y in a list of length n. With a prediction i for the position of y, the following learning augmented algorithm can be used.

  • First, look at position i in the list. If x_i = y, the element has been found.
  • If x_i < y, look at positions i+1, i+2, i+4, ... until an index j with x_j ≥ y is found.
    • Now perform a binary search on x_i, ..., x_j.
  • If x_i > y, do the same as in the previous case, but instead consider the positions i−1, i−2, i−4, ...
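The probe-and-double scheme above can be sketched in Python as follows (the function name and the clamping of out-of-range predictions are our own additions, not part of the original description):

```python
def predicted_binary_search(xs, y, i):
    """Search the sorted list xs for the value y, starting from a
    predicted index i. Runs in O(log eta) time, where eta = |i - i*|
    is the prediction error and i* is the true position of y.
    Returns the index of y, or -1 if y is not in the list."""
    n = len(xs)
    i = max(0, min(i, n - 1))  # clamp the prediction into valid range
    if xs[i] == y:
        return i  # the prediction was exactly right
    if xs[i] < y:
        # Double the step size until we overshoot y (or hit the end).
        step, lo = 1, i
        while i + step < n and xs[i + step] < y:
            lo = i + step
            step *= 2
        hi = min(i + step, n - 1)
    else:
        # Symmetric case: double the step size downward.
        step, hi = 1, i
        while i - step >= 0 and xs[i - step] > y:
            hi = i - step
            step *= 2
        lo = max(i - step, 0)
    # Classic binary search on the bracketed range [lo, hi].
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == y:
            return mid
        if xs[mid] < y:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

For example, `predicted_binary_search([1, 3, 5, 7, 9, 11], 9, 1)` probes positions 1, 2, 3, 5 before a short binary search pins down index 4. The doubling phase brackets the true position in a window whose size is proportional to the prediction error, which is what makes the running time degrade gracefully with η.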

The error is defined to be η = |i − i*|, where i* is the true index of y. In the learning augmented algorithm, probing the positions i+1, i+2, i+4, ... takes about log_2(η) steps. Then a binary search is performed on a sublist of size at most 2η, which takes another log_2(η) + O(1) steps. This makes the total running time of the algorithm O(log η). So, when the error is small, the algorithm is faster than a normal binary search, which shows that the algorithm is consistent. Even in the worst case, the error is at most n, so the algorithm takes at most O(log n) steps; hence the algorithm is also robust.

More examples

Learning augmented algorithms have been designed for a variety of problems, including the ski rental problem, bipartite matching, and weighted paging (see the references below).

References

  1. Mitzenmacher, Michael; Vassilvitskii, Sergei (2020). "Algorithms with Predictions". Beyond the Worst-Case Analysis of Algorithms. Cambridge University Press. pp. 646–662. arXiv:2006.09123. doi:10.1017/9781108637435.037.
  2. Wang, Shufan; Li, Jian; Wang, Shiqiang (2020). "Online Algorithms for Multi-shop Ski Rental with Machine Learned Advice". NIPS'20: Proceedings of the 34th International Conference on Neural Information Processing Systems. arXiv:2002.05808. ISBN 1-71382-954-1. OCLC 1263313383.
  3. Dinitz, Michael; Im, Sungjin; Lavastida, Thomas; Moseley, Benjamin; Vassilvitskii, Sergei (2021). "Faster Matchings via Learned Duals". Advances in Neural Information Processing Systems. Curran Associates, Inc.
  4. Bansal, Nikhil; Coester, Christian; Kumar, Ravi; Purohit, Manish; Vee, Erik (2022). "Learning-Augmented Weighted Paging". Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA). Society for Industrial and Applied Mathematics. pp. 67–89. doi:10.1137/1.9781611977073.4.
