Krichevsky–Trofimov estimator


In information theory, given an unknown stationary source π with alphabet A and a sample w drawn from π, the Krichevsky–Trofimov (KT) estimator produces an estimate p_i(w) of the probability of each symbol i ∈ A. This estimator is optimal in the sense that it asymptotically minimizes the worst-case regret.

For a binary alphabet and a string w with m zeroes and n ones, the KT estimates p_0(w) and p_1(w) are defined as:

p_0(w) = \frac{m + 1/2}{m + n + 1}, \qquad p_1(w) = \frac{n + 1/2}{m + n + 1}.
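As a concrete illustration, the binary KT estimates follow directly from the zero/one counts; the helper name `kt_binary` below is hypothetical, not from the article.

```python
def kt_binary(m: int, n: int) -> tuple[float, float]:
    """KT probability estimates for a binary alphabet, given m observed
    zeroes and n observed ones (add-1/2 smoothing on each count)."""
    total = m + n + 1  # common denominator m + n + 1
    return (m + 0.5) / total, (n + 0.5) / total

# With no observations the estimate is uniform:
print(kt_binary(0, 0))  # (0.5, 0.5)
# After observing w = 0001 (three zeroes, one one):
print(kt_binary(3, 1))  # (0.7, 0.3)
```

Note that the estimate never assigns probability zero to an unseen symbol, which is what makes it usable for universal coding.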

This corresponds to the posterior mean of a Beta-Bernoulli model with a Beta(1/2, 1/2) (Jeffreys) prior. For a general alphabet, the estimate is the posterior mean of a Dirichlet-Categorical model with a symmetric Dirichlet(1/2, …, 1/2) prior.
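A minimal sketch of the general-alphabet case, assuming the symmetric Dirichlet(1/2, …, 1/2) prior described above; the function name `kt_categorical` is a hypothetical label for this illustration.

```python
from collections import Counter

def kt_categorical(w: str, alphabet: str) -> dict[str, float]:
    """KT estimate for a general alphabet of size k: the posterior mean
    of a Dirichlet(1/2, ..., 1/2)-Categorical model given sample w."""
    counts = Counter(w)
    k = len(alphabet)
    total = len(w) + k / 2  # sum of all counts plus k * (1/2)
    return {a: (counts[a] + 0.5) / total for a in alphabet}

probs = kt_categorical("aab", "abc")
# counts: a=2, b=1, c=0; denominator 3 + 3/2 = 4.5
# probs ≈ {'a': 0.556, 'b': 0.333, 'c': 0.111}
```

For k = 2 this reduces to the binary formula above, since each count gets 1/2 added and the denominator becomes m + n + 1.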

References

  1. Krichevsky, R. E.; Trofimov, V. K. (1981). "The Performance of Universal Encoding". IEEE Trans. Inf. Theory. IT-27 (2): 199–207. doi:10.1109/TIT.1981.1056331.
