
Finite-state transducer

Article snapshot taken from Wikipedia, under the Creative Commons Attribution-ShareAlike license.

A finite-state transducer (FST) is a finite-state machine with two memory tapes, following the terminology for Turing machines: an input tape and an output tape. This contrasts with an ordinary finite-state automaton, which has a single tape. An FST is a type of finite-state automaton (FSA) that maps between two sets of symbols. An FST is more general than an FSA. An FSA defines a formal language by defining a set of accepted strings, while an FST defines a relation between sets of strings.

An FST reads a set of strings on the input tape and generates a set of strings on the output tape. It can be thought of as a translator or relater between the two sets of strings.

In morphological parsing, for example, an FST takes a string of letters as input and outputs a string of morphemes; an English analyzer might map the surface form "cats" to "cat+N+PL".

Overview

External video: Finite State Transducers, Karlsruhe Institute of Technology (YouTube)

An automaton can be said to recognize a string if we view the content of its tape as input. In other words, the automaton computes a function that maps strings into the set {0,1}. Alternatively, we can say that an automaton generates strings, which means viewing its tape as an output tape. On this view, the automaton generates a formal language, which is a set of strings. The two views of automata are equivalent: the function that the automaton computes is precisely the indicator function of the set of strings it generates. The class of languages generated by finite automata is known as the class of regular languages.

The two tapes of a transducer are typically viewed as an input tape and an output tape. On this view, a transducer is said to transduce (i.e., translate) the contents of its input tape to its output tape, by accepting a string on its input tape and generating another string on its output tape. It may do so nondeterministically and it may produce more than one output for each input string. A transducer may also produce no output for a given input string, in which case it is said to reject the input. In general, a transducer computes a relation between two formal languages.

Each string-to-string finite-state transducer relates strings over its input alphabet Σ to strings over its output alphabet Γ. Relations R on Σ* × Γ* that can be implemented as finite-state transducers are called rational relations. Rational relations that are partial functions, i.e. that relate every input string from Σ* to at most one string in Γ*, are called rational functions.

Finite-state transducers are often used for phonological and morphological analysis in natural language processing research and applications. Pioneers in this field include Ronald Kaplan, Lauri Karttunen, Martin Kay and Kimmo Koskenniemi. A common way of using transducers is in a so-called "cascade", where transducers for various operations are combined into a single transducer by repeated application of the composition operator (defined below).

Formal construction

Formally, a finite transducer T is a 6-tuple (Q, Σ, Γ, I, F, δ) such that:

  • Q is a finite set, the set of states;
  • Σ is a finite set, called the input alphabet;
  • Γ is a finite set, called the output alphabet;
  • I is a subset of Q, the set of initial states;
  • F is a subset of Q, the set of final states; and
  • $\delta \subseteq Q\times (\Sigma \cup \{\epsilon \})\times (\Gamma \cup \{\epsilon \})\times Q$ (where $\epsilon$ is the empty string) is the transition relation.

We can view (Q, δ) as a labeled directed graph, known as the transition graph of T: the set of vertices is Q, and $(q,a,b,r)\in \delta$ means that there is a labeled edge going from vertex q to vertex r. We also say that a is the input label and b the output label of that edge.

NOTE: This definition of finite transducer is also called a letter transducer (Roche and Schabes 1997); alternative definitions are possible, but they can all be converted into this form.

Define the extended transition relation $\delta^{*}$ as the smallest set such that:

  • $\delta \subseteq \delta^{*}$;
  • $(q,\epsilon ,\epsilon ,q)\in \delta^{*}$ for all $q\in Q$; and
  • whenever $(q,x,y,r)\in \delta^{*}$ and $(r,a,b,s)\in \delta$ then $(q,xa,yb,s)\in \delta^{*}$.

The extended transition relation is essentially the reflexive transitive closure of the transition graph that has been augmented to take edge labels into account. The elements of δ {\displaystyle \delta ^{*}} are known as paths. The edge labels of a path are obtained by concatenating the edge labels of its constituent transitions in order.

The behavior of the transducer T is the rational relation defined as follows: $x[T]y$ if and only if there exist $i\in I$ and $f\in F$ such that $(i,x,y,f)\in \delta^{*}$. This is to say that T transduces a string $x\in \Sigma^{*}$ into a string $y\in \Gamma^{*}$ if there exists a path from an initial state to a final state whose input label is x and whose output label is y.
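
As a minimal illustration of this construction, the following Python sketch encodes a small letter transducer as the tuple above and enumerates, by depth-first search over paths from an initial to a final state, every output string related to a given input string. The toy machine and the function name transduce are hypothetical choices made for this sketch, not part of the formal definition; the search assumes the transducer has no cycles of epsilon-input transitions, so it terminates.

    from collections import defaultdict

    # A letter transducer T = (Q, Sigma, Gamma, I, F, delta) as in the
    # formal construction above.  EPS stands for the empty string (epsilon).
    EPS = ""

    # Hypothetical toy transducer: it relates "cat" to "cat+N" and
    # "cats" to "cat+N+PL" (a morphological-analysis flavoured example).
    Q = {0, 1, 2, 3, 4, 5}   # state set (not needed explicitly by the search)
    I = {0}
    F = {4, 5}
    delta = {
        (0, "c", "c", 1),
        (1, "a", "a", 2),
        (2, "t", "t", 3),
        (3, EPS, "+N", 4),   # epsilon input label, non-empty output label
        (4, "s", "+PL", 5),  # reads "s", writes the plural tag
    }

    def transduce(x, I, F, delta):
        """Enumerate all y with x [T] y by depth-first search over paths from
        an initial to a final state whose input label spells x.
        Assumes no cycles of epsilon-input transitions."""
        by_state = defaultdict(list)
        for (q, a, b, r) in delta:
            by_state[q].append((a, b, r))

        results = set()

        def explore(state, pos, out):
            if state in F and pos == len(x):
                results.add(out)
            for (a, b, r) in by_state[state]:
                if a == EPS:                          # consume no input symbol
                    explore(r, pos, out + b)
                elif pos < len(x) and x[pos] == a:    # consume one input symbol
                    explore(r, pos + 1, out + b)

        for i in I:
            explore(i, 0, EPS)
        return results

    print(transduce("cats", I, F, delta))  # {'cat+N+PL'}
    print(transduce("cat", I, F, delta))   # {'cat+N'}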

Weighted automata

See also: Rational series

Finite-state transducers can be weighted, where each transition is labelled with a weight in addition to the input and output labels. A weighted finite-state transducer (WFST) over a set K of weights can be defined similarly to an unweighted one as an 8-tuple T = (Q, Σ, Γ, I, F, E, λ, ρ), where:

  • Q, Σ, Γ, I, F are defined as above;
  • $E\subseteq Q\times (\Sigma \cup \{\epsilon \})\times (\Gamma \cup \{\epsilon \})\times Q\times K$ (where $\epsilon$ is the empty string) is the finite set of transitions;
  • $\lambda :I\rightarrow K$ maps initial states to weights;
  • $\rho :F\rightarrow K$ maps final states to weights.

In order to make certain operations on WFSTs well-defined, it is convenient to require the set of weights to form a semiring. Two typical semirings used in practice are the log semiring and the tropical semiring; nondeterministic automata may be regarded as having weights in the Boolean semiring.
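
As a rough sketch of the semiring viewpoint, the following Python fragment computes the weight that a small epsilon-free WFST assigns to an input/output pair in the tropical semiring: the semiring sum (here min) over accepting paths of the semiring product (here ordinary addition) of the initial weight, the transition weights along the path, and the final weight. The toy machine, the dictionary-based semiring encoding, and the function name weight are assumptions made for this illustration, not a standard library API.

    import math
    from collections import defaultdict

    # Tropical semiring: "plus" is min, "times" is +, zero is +inf, one is 0.
    TROPICAL = dict(plus=min, times=lambda a, b: a + b, zero=math.inf, one=0.0)

    # Hypothetical epsilon-free WFST T = (Q, Sigma, Gamma, I, F, E, lam, rho):
    # two ways to transduce "a" into "x", with different weights.
    I = {0}
    F = {1}
    E = {
        (0, "a", "x", 1, 0.5),
        (0, "a", "x", 1, 1.5),
    }
    lam = {0: 0.0}   # initial weights
    rho = {1: 0.0}   # final weights

    def weight(x, y, E, I, F, lam, rho, K=TROPICAL):
        """Semiring sum, over all accepting paths with input x and output y,
        of lam(i) (*) path weight (*) rho(f).  Assumes no epsilon labels."""
        by_state = defaultdict(list)
        for (q, a, b, r, w) in E:
            by_state[q].append((a, b, r, w))

        total = K["zero"]

        def explore(state, i, j, acc):
            nonlocal total
            if state in F and i == len(x) and j == len(y):
                total = K["plus"](total, K["times"](acc, rho[state]))
            for (a, b, r, w) in by_state[state]:
                if x[i:i + len(a)] == a and y[j:j + len(b)] == b:
                    explore(r, i + len(a), j + len(b), K["times"](acc, w))

        for q in I:
            explore(q, 0, 0, lam[q])
        return total

    print(weight("a", "x", E, I, F, lam, rho))  # 0.5, the cheaper of the two paths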

Stochastic FST

Stochastic FSTs (also known as probabilistic FSTs or statistical FSTs) are presumably a form of weighted FST.

Operations on finite-state transducers

The following operations defined on finite automata also apply to finite transducers:

  • Union. Given transducers T and S, there exists a transducer $T\cup S$ such that $x[T\cup S]y$ if and only if $x[T]y$ or $x[S]y$.
  • Concatenation. Given transducers T and S, there exists a transducer $T\cdot S$ such that $x[T\cdot S]y$ if and only if there exist $x_{1},x_{2},y_{1},y_{2}$ with $x=x_{1}x_{2}$, $y=y_{1}y_{2}$, $x_{1}[T]y_{1}$ and $x_{2}[S]y_{2}$.
  • Kleene closure. Given a transducer T, there exists a transducer $T^{*}$ with the following properties:
$\epsilon [T^{*}]\epsilon$; (k1)
if $w[T^{*}]y$ and $x[T]z$, then $wx[T^{*}]yz$; (k2)
and $x[T^{*}]y$ does not hold unless mandated by (k1) or (k2).
  • Composition. Given a transducer T on alphabets Σ and Γ and a transducer S on alphabets Γ and Δ, there exists a transducer $T\circ S$ on Σ and Δ such that $x[T\circ S]z$ if and only if there exists a string $y\in \Gamma^{*}$ such that $x[T]y$ and $y[S]z$. This operation extends to the weighted case; a sketch of the construction for the epsilon-free case is given after this list.
This definition uses the same notation used in mathematics for relation composition. However, the conventional reading for relation composition is the other way around: given two relations T and S, $(x,z)\in T\circ S$ when there exists some y such that $(x,y)\in S$ and $(y,z)\in T$.
  • Projection to an automaton. There are two projection functions: $\pi_{1}$ preserves the input tape, and $\pi_{2}$ preserves the output tape. The first projection, $\pi_{1}$, is defined as follows:
Given a transducer T, there exists a finite automaton $\pi_{1}T$ such that $\pi_{1}T$ accepts x if and only if there exists a string y for which $x[T]y$.
The second projection, $\pi_{2}$, is defined similarly.
  • Determinization. Given a transducer T, we want to build an equivalent transducer that has a unique initial state and such that no two transitions leaving any state share the same input label. The powerset construction can be extended to transducers, or even weighted transducers, but sometimes fails to halt; indeed, some non-deterministic transducers do not admit equivalent deterministic transducers. Characterizations of determinizable transducers have been proposed along with efficient algorithms to test them: they rely on the semiring used in the weighted case as well as a general property on the structure of the transducer (the twins property).
  • Weight pushing for the weighted case.
  • Minimization for the weighted case.
  • Removal of epsilon-transitions.
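
The composition operation listed above admits a simple product construction when neither transducer uses epsilon labels; handling epsilons in general requires additional machinery (an epsilon-filter construction). The following Python sketch, with transducers encoded as (Q, I, F, delta) tuples as in the earlier sketch, is an illustration under that epsilon-free assumption rather than a general algorithm.

    def compose(T, S):
        """Compose two epsilon-free letter transducers given as (Q, I, F, delta)
        tuples, where delta is a set of (q, a, b, r) transitions.
        The result relates x to z iff T relates x to some y and S relates y to z."""
        Q1, I1, F1, d1 = T
        Q2, I2, F2, d2 = S

        delta = set()
        for (q1, a, b, r1) in d1:
            for (q2, b2, c, r2) in d2:
                if b == b2:   # output label of T matches input label of S
                    delta.add(((q1, q2), a, c, (r1, r2)))

        Q = {(q1, q2) for q1 in Q1 for q2 in Q2}
        I = {(i1, i2) for i1 in I1 for i2 in I2}
        F = {(f1, f2) for f1 in F1 for f2 in F2}
        return (Q, I, F, delta)

    # Usage: T maps "a" to "b", S maps "b" to "c"; T o S maps "a" to "c".
    T = ({0, 1}, {0}, {1}, {(0, "a", "b", 1)})
    S = ({0, 1}, {0}, {1}, {(0, "b", "c", 1)})
    print(compose(T, S)[3])  # {((0, 0), 'a', 'c', (1, 1))}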

Additional properties of finite-state transducers

  • It is decidable whether the relation of a transducer T is empty.
  • It is decidable whether there exists a string y such that $x[T]y$ for a given string x.
  • It is undecidable whether two transducers are equivalent. Equivalence is however decidable in the special case where the relation of a transducer T is a (partial) function.
  • If one defines the alphabet of labels $L=(\Sigma \cup \{\epsilon \})\times (\Gamma \cup \{\epsilon \})$, finite-state transducers are isomorphic to NDFA over the alphabet $L$, and may therefore be determinized (turned into deterministic finite automata over the alphabet $L=[(\Sigma \cup \{\epsilon \})\times \Gamma ]\cup [\Sigma \times (\Gamma \cup \{\epsilon \})]$) and subsequently minimized so that they have the minimum number of states.
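
A minimal sketch of this view, reusing the transition-set encoding from the earlier sketches (an assumption of this illustration): each transducer transition (q, a, b, r) becomes a single automaton transition labelled with the pair (a, b), after which standard determinization and minimization algorithms for finite automata apply.

    def fst_to_nfa(delta):
        """View each FST transition (q, a, b, r) as an NFA transition
        (q, (a, b), r) over the pair alphabet L = (Sigma u {eps}) x (Gamma u {eps}).
        States, initial states and final states are left unchanged."""
        return {(q, (a, b), r) for (q, a, b, r) in delta}

    # Usage with two hypothetical transitions; "" stands for epsilon.
    print(fst_to_nfa({(0, "c", "c", 1), (3, "", "+N", 4)}))
    # {(0, ('c', 'c'), 1), (3, ('', '+N'), 4)}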

Applications

FSTs are used in the lexical analysis phase of compilers to associate semantic value with the discovered tokens.

Context-sensitive rewriting rules of the form a → b / c _ d, used in linguistics to model phonological rules and sound change, are computationally equivalent to finite-state transducers, provided that application is nonrecursive, i.e. the rule is not allowed to rewrite the same substring twice.

Weighted FSTs have found applications in natural language processing, including machine translation, and in machine learning. An implementation for part-of-speech tagging can be found as one component of the OpenGrm library.

Notes

  1. Jurafsky, Daniel (2009). Speech and Language Processing. Pearson. ISBN 9789332518414.
  2. Koskenniemi 1983
  3. Berstel, Jean; Reutenauer, Christophe (2011). Noncommutative rational series with applications. Encyclopedia of Mathematics and Its Applications. Vol. 137. Cambridge: Cambridge University Press. p. 16. ISBN 978-0-521-19022-0. Zbl 1250.68007.
  4. Lothaire, M. (2005). Applied combinatorics on words. Encyclopedia of Mathematics and Its Applications. Vol. 105. A collective work by Jean Berstel, Dominique Perrin, Maxime Crochemore, Eric Laporte, Mehryar Mohri, Nadia Pisanti, Marie-France Sagot, Gesine Reinert, Sophie Schbath, Michael Waterman, Philippe Jacquet, Wojciech Szpankowski, Dominique Poulalhon, Gilles Schaeffer, Roman Kolpakov, Gregory Koucherov, Jean-Paul Allouche and Valérie Berthé. Cambridge: Cambridge University Press. p. 211. ISBN 0-521-84802-4. Zbl 1133.68067.
  5. Boigelot, Bernard; Legay, Axel; Wolper, Pierre (2003). "Iterating Transducers in the Large". Computer Aided Verification. Lecture Notes in Computer Science. Vol. 2725. Springer Berlin Heidelberg. pp. 223–235. doi:10.1007/978-3-540-45069-6_24. eISSN 1611-3349. ISBN 978-3-540-40524-5. ISSN 0302-9743.
  6. Mohri 2004, pp. 3–5
  7. "Determinization of Transducers".
  8. Mohri 2004, pp. 5–6
  9. Allauzen & Mohri 2003
  10. Mohri 2004, pp. 7–9
  11. Mohri 2004, pp. 9–11
  12. Griffiths 1968
  13. Charles N. Fischer; Ron K. Cytron; Richard J. LeBlanc, Jr. (2010). "Scanning - Theory and Practice". Crafting a Compiler. Addison-Wesley. ISBN 978-0-13-606705-4.
  14. "Regular Models of Phonological Rule Systems" (PDF). Archived from the original (PDF) on October 11, 2010. Retrieved August 25, 2012.
  15. Kevin Knight; Jonathan May (2009). "Applications of Weighted Automata in Natural Language Processing". In Manfred Droste; Werner Kuich; Heiko Vogler (eds.). Handbook of Weighted Automata. Springer Science & Business Media. ISBN 978-3-642-01492-5.
  16. "Learning with Weighted Transducers" (PDF). Retrieved April 29, 2017.
  17. OpenGrm
