Probabilistic Turing machine

In theoretical computer science, a probabilistic Turing machine is a non-deterministic Turing machine that chooses between the available transitions at each point according to some probability distribution. As a consequence, a probabilistic Turing machine can, unlike a deterministic Turing machine, have stochastic results: on a given input, it may have different run times from one execution to the next, or it may not halt at all; furthermore, it may accept the input in one execution and reject the same input in another execution.

In the case of equal probabilities for the transitions, probabilistic Turing machines can be defined as deterministic Turing machines having an additional "write" instruction where the value of the write is uniformly distributed in the Turing machine's alphabet (generally, an equal likelihood of writing a "1" or a "0" on to the tape). Another common reformulation is simply a deterministic Turing machine with an added tape full of random bits called the "random tape".
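
The random-tape reformulation is straightforward to simulate. The sketch below is illustrative only (its function name and conventions are invented for this example and are not a standard API): a deterministic transition function is driven as usual, except that at every step it also sees one fresh random bit, playing the role of the next cell on the random tape.

    import random

    def run_with_random_tape(step, start_state, input_string, max_steps=10_000):
        """Sketch of the 'random tape' reformulation: the deterministic transition
        function `step` receives one fresh random bit per step.  `step` maps
        (state, symbol, random_bit) -> (new_state, symbol_to_write, 'L' or 'R')."""
        tape = dict(enumerate(input_string))    # sparse tape; '#' denotes blank
        state, head = start_state, 0
        for _ in range(max_steps):
            if state in ("accept", "reject"):
                return state
            bit = random.getrandbits(1)         # next cell of the random tape
            state, write, move = step(state, tape.get(head, "#"), bit)
            tape[head] = write
            head += 1 if move == "R" else -1
        return "no halt"                        # did not halt within the step budget

A machine whose step function simply ignores the random bit behaves deterministically, which is one way to see that a deterministic Turing machine is a special case of this model.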

A quantum computer (or quantum Turing machine) is another model of computation that is inherently probabilistic.

Description

A probabilistic Turing machine is a type of nondeterministic Turing machine in which each nondeterministic step is a "coin-flip", that is, at each step there are two possible next moves and the Turing machine probabilistically selects which move to take.

Formal definition

A probabilistic Turing machine can be formally defined as the 7-tuple M = ( Q , Σ , Γ , q 0 , A , δ 1 , δ 2 ) {\displaystyle M=(Q,\Sigma ,\Gamma ,q_{0},A,\delta _{1},\delta _{2})} , where

  • Q {\displaystyle Q} is a finite set of states
  • Σ {\displaystyle \Sigma } is the input alphabet
  • Γ {\displaystyle \Gamma } is a tape alphabet, which includes the blank symbol #
  • q 0 ∈ Q {\displaystyle q_{0}\in Q} is the initial state
  • A ⊆ Q {\displaystyle A\subseteq Q} is the set of accepting (final) states
  • δ 1 : Q × Γ → Q × Γ × { L , R } {\displaystyle \delta _{1}:Q\times \Gamma \to Q\times \Gamma \times \{L,R\}} is the first probabilistic transition function. L {\displaystyle L} is a movement one cell to the left on the Turing machine's tape and R {\displaystyle R} is a movement one cell to the right.
  • δ 2 : Q × Γ → Q × Γ × { L , R } {\displaystyle \delta _{2}:Q\times \Gamma \to Q\times \Gamma \times \{L,R\}} is the second probabilistic transition function.

At each step, the Turing machine probabilistically applies either the transition function δ 1 {\displaystyle \delta _{1}} or the transition function δ 2 {\displaystyle \delta _{2}} . This choice is made independently of all prior choices. In this way, the process of selecting a transition function at each step of the computation resembles a coin flip.
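
Under the assumption that both transition functions are total, a single run of the machine in this formal model can be simulated as in the following sketch. The helper name and the convention of treating a run that exhausts the step budget as rejecting are choices made for this illustration, not part of the definition.

    import random

    def run_ptm(q0, A, delta1, delta2, w, max_steps=10_000):
        """One run of the 7-tuple machine above (Q, Sigma and Gamma are left
        implicit in the transition functions).  delta1 and delta2 each map
        (state, symbol) to (new_state, symbol_to_write, 'L' or 'R')."""
        tape = dict(enumerate(w))                # '#' is the blank symbol
        state, head = q0, 0
        for _ in range(max_steps):
            if state in A:
                return True                      # reached an accepting (final) state
            delta = delta1 if random.getrandbits(1) else delta2   # the coin flip
            state, write, move = delta(state, tape.get(head, "#"))
            tape[head] = write
            head += 1 if move == "R" else -1
        return False                             # step budget exhausted; treat as reject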

The probabilistic selection of the transition function at each step introduces error into the Turing machine; that is, strings which the Turing machine is meant to accept may on some occasions be rejected and strings which the Turing machine is meant to reject may on some occasions be accepted. To accommodate this, a language L {\displaystyle L} is said to be recognized with error probability ϵ {\displaystyle \epsilon } by a probabilistic Turing machine M {\displaystyle M} if:

  1. a string w {\displaystyle w} in L {\displaystyle L} implies that Pr [ M  accepts  w ] ≥ 1 − ϵ {\displaystyle {\text{Pr}}[M{\text{ accepts }}w]\geq 1-\epsilon }
  2. a string w {\displaystyle w} not in L {\displaystyle L} implies that Pr [ M  rejects  w ] ≥ 1 − ϵ {\displaystyle {\text{Pr}}[M{\text{ rejects }}w]\geq 1-\epsilon }
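
Both conditions refer to the probability taken over the machine's coin flips on a fixed input. For a simulated machine such as the sketch above, that probability can be estimated empirically by repeated runs; the helper below is a hypothetical illustration, not part of the formal definition.

    def estimate_acceptance_probability(run_once, w, trials=2_000):
        """Estimate Pr[M accepts w] for a simulated machine.  `run_once(w)` must
        return True for an accepting run and False for a rejecting one.  If L is
        recognized with error probability eps, inputs in L should come out near
        or above 1 - eps, and inputs outside L near or below eps."""
        accepts = sum(1 for _ in range(trials) if run_once(w))
        return accepts / trials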

Complexity classes

Unsolved problem in computer science: Is P = BPP?

As a result of the error introduced by utilizing probabilistic coin tosses, the notion of acceptance of a string by a probabilistic Turing machine can be defined in different ways. One such notion that includes several important complexity classes is allowing for an error probability of 1/3. For instance, the complexity class BPP is defined as the class of languages recognized by a probabilistic Turing machine in polynomial time with an error probability of 1/3. Another class defined using this notion of acceptance is BPL, which is the same as BPP but places the additional restriction that languages must be solvable in logarithmic space.
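
The constant 1/3 in the definition of BPP is not essential: any error probability bounded away from 1/2 can be reduced exponentially by running the machine several times on the same input and taking a majority vote, by a standard Chernoff-bound argument. A minimal sketch of this amplification, again assuming a simulated machine that returns one accept/reject sample per call:

    def amplified(run_once, w, repetitions=101):
        """Majority vote over an odd number of independent runs.  If each run errs
        with probability at most 1/3, the majority errs with probability that
        decreases exponentially in `repetitions` (Chernoff bound)."""
        votes = sum(1 for _ in range(repetitions) if run_once(w))
        return votes > repetitions // 2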

Complexity classes arising from other definitions of acceptance include RP, co-RP, and ZPP. If the machine is restricted to logarithmic space instead of polynomial time, the analogous classes RL, co-RL, and ZPL are obtained. Enforcing both restrictions yields RLP, co-RLP, BPLP, and ZPLP.

Probabilistic computation is also critical for the definition of most classes of interactive proof systems, in which the verifier machine depends on randomness to avoid being predicted and tricked by the all-powerful prover machine. For example, the class IP equals PSPACE, but if randomness is removed from the verifier, we are left with only NP, which is not known to be, but is widely believed to be, a considerably smaller class.

One of the central questions of complexity theory is whether randomness adds power; that is, is there a problem that can be solved in polynomial time by a probabilistic Turing machine but not by a deterministic Turing machine? Or can deterministic Turing machines efficiently simulate all probabilistic Turing machines with at most a polynomial slowdown? It is known that P ⊆ BPP, since a deterministic Turing machine is just a special case of a probabilistic Turing machine. However, it is uncertain whether (though widely suspected that) BPP ⊆ P, which would imply that BPP = P. The same question for log space instead of polynomial time (does L = BPLP?) is even more widely believed to be true. On the other hand, the power randomness gives to interactive proof systems, as well as the simple algorithms it yields for difficult problems such as polynomial-time primality testing and log-space graph connectedness testing, suggests that randomness may add power.
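
As a concrete illustration of the kind of simple randomized algorithm alluded to above, the Miller-Rabin primality test runs in polynomial time and has one-sided error: a "composite" answer is always correct, while a "probably prime" answer is wrong with probability at most 4^-k after k independent rounds. The sketch below is offered only as an example of a probabilistic polynomial-time algorithm, not as the specific procedure referenced in the article.

    import random

    def miller_rabin(n, rounds=20):
        """One-sided-error probabilistic primality test.  A False (composite) answer
        is always correct; a True answer is wrong with probability <= 4**-rounds."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        # Write n - 1 as d * 2**r with d odd.
        d, r = n - 1, 0
        while d % 2 == 0:
            d //= 2
            r += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)       # the machine's coin flips
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False                     # witness found: n is composite
        return True                              # probably prime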
