IBM alignment models

Sequence of models in statistical machine translation

IBM alignment models are a sequence of increasingly complex models used in statistical machine translation to train a translation model and an alignment model, starting with lexical translation probabilities and moving to reordering and word duplication. They underpinned the majority of statistical machine translation systems for almost twenty years starting in the early 1990s, until neural machine translation began to dominate. These models offer a principled probabilistic formulation and (mostly) tractable inference.

The original work on statistical machine translation at IBM proposed five models, and a Model 6 was proposed later. The sequence of the six models can be summarized as:

  • Model 1: lexical translation
  • Model 2: additional absolute alignment model
  • Model 3: extra fertility model
  • Model 4: added relative alignment model
  • Model 5: fixes the deficiency problem
  • Model 6: Model 4 combined with an HMM alignment model in a log-linear way

Mathematical setup

The IBM alignment models treat translation as a conditional probability model. For each source-language ("foreign") sentence f, we generate both a target-language ("English") sentence e and an alignment a. The problem then is to find a good statistical model for p(e, a | f), the probability that we would generate English sentence e and alignment a given a foreign sentence f.

The meaning of an alignment grows increasingly complicated as the model version number increases. See Model 1 for the simplest version.

Model 1

Word alignment

Given any foreign-English sentence pair (e, f), an alignment for the sentence pair is a function of type {1, ..., l_e} → {0, 1, ..., l_f}. That is, we assume that the English word at location i is "explained" by the foreign word at location a(i). For example, consider the following pair of sentences

It will surely rain tomorrow -- 明日 は きっと 雨 だ

We can align some English words to corresponding Japanese words, but not every one:

it -> ?

will -> ?

surely -> きっと

rain -> 雨

tomorrow -> 明日

This happens in general because of the different grammar and conventions of speech in different languages. English sentences require a subject; when no semantic subject is available, the dummy pronoun it is used. Japanese verbs do not have different forms for future and present tense, and the future tense is implied by the noun 明日 (tomorrow). Conversely, the topic-marker は and the grammar word だ (roughly "to be") do not correspond to any word in the English sentence. So, we can write the alignment as

1-> 0; 2 -> 0; 3 -> 3; 4 -> 4; 5 -> 1

where 0 means that there is no corresponding alignment.

Thus, we see that the alignment function is in general a function of type {1, ..., l_e} → {0, 1, ..., l_f}.

Later models allow one English word to be aligned with multiple foreign words.
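
As a concrete illustration, the example alignment above can be written down directly in code. The following is a minimal sketch (the variable names are illustrative, not from the original article), with position 0 of the foreign sentence reserved for the unaligned case:

    english = ["it", "will", "surely", "rain", "tomorrow"]
    foreign = ["NULL", "明日", "は", "きっと", "雨", "だ"]   # index 0 plays the role of "no alignment"

    # alignment[i - 1] = a(i): the foreign position that "explains" English position i
    alignment = [0, 0, 3, 4, 1]

    for i, j in enumerate(alignment, start=1):
        print(f"{english[i - 1]} -> {foreign[j]}")   # e.g. "surely -> きっと"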

Statistical model

Given the above definition of alignment, we can define the statistical model used by Model 1:

  • Start with a "dictionary". Its entries are of the form t(e_i | f_j), which can be interpreted as saying "the foreign word f_j is translated to the English word e_i with probability t(e_i | f_j)".
  • After being given a foreign sentence f with length l_f, we first generate an English sentence length l_e uniformly from {1, 2, ..., N}. In particular, it does not depend on f or l_f.
  • Then, we generate an alignment uniformly at random from the set of all possible alignment functions {1, ..., l_e} → {0, 1, ..., l_f}.
  • Finally, generate each English word e_1, e_2, ..., e_{l_e} independently of every other English word: the word e_i is generated according to t(e_i | f_{a(i)}).

Together, we have the probability

    p(e, a | f) = \frac{1/N}{(1 + l_f)^{l_e}} \prod_{i=1}^{l_e} t(e_i | f_{a(i)})

IBM Model 1 uses very simplistic assumptions on the statistical model, in order to allow the following algorithm to have a closed-form solution.
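
To make the formula concrete, here is a minimal sketch (illustrative names, not an official implementation) that evaluates log p(e, a | f) for a given sentence pair, alignment, and translation table:

    import math

    def model1_log_prob(e_words, f_words, alignment, t, N):
        """Log of p(e, a | f) under IBM Model 1.

        e_words   : English words e_1 .. e_{l_e}
        f_words   : foreign words with f_words[0] = "NULL", so len(f_words) = l_f + 1
        alignment : alignment[i - 1] = a(i), an index into f_words
        t         : dict mapping (english_word, foreign_word) -> t(e | f)
        N         : maximum English sentence length assumed by the model
        """
        l_e, l_f = len(e_words), len(f_words) - 1
        log_p = -math.log(N) - l_e * math.log(1 + l_f)        # length and alignment terms
        for i, e in enumerate(e_words):
            log_p += math.log(t[(e, f_words[alignment[i]])])  # lexical translation terms
        return log_p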

Learning from a corpus

If a dictionary is not provided at the start, but we have a corpus of English-foreign language pairs {(e^{(k)}, f^{(k)})}_k (without alignment information), then the model can be cast into the following form:

  • fixed parameters: the foreign sentences {f^{(k)}}_k.
  • learnable parameters: the entries of the dictionary t(e_i | f_j).
  • observable variables: the English sentences {e^{(k)}}_k.
  • latent variables: the alignments {a^{(k)}}_k.


In this form, this is exactly the kind of problem solved by the expectation–maximization (EM) algorithm. Due to the simplistic assumptions, the algorithm has a closed-form, efficiently computable solution, which is the solution to the following equations:

    \begin{cases} \max_{t'} \sum_{k} \sum_{i} \sum_{a^{(k)}} p(a^{(k)} | e^{(k)}, f^{(k)}) \ln t'(e_i^{(k)} | f_{a^{(k)}(i)}^{(k)}) \\ \sum_{x} t'(e_x | f_y) = 1 \quad \forall y \end{cases}

where p(a^{(k)} | e^{(k)}, f^{(k)}) is the posterior probability of the alignment under the current parameters t. This can be solved by Lagrange multipliers, then simplified. For a detailed derivation of the algorithm, see chapter 4 of Koehn (2010).

In short, the EM algorithm goes as follows:

INPUT. a corpus of English-foreign sentence pairs {(e^{(k)}, f^{(k)})}_k


INITIALIZE. matrix of translation probabilities t(e_x | f_y).

This could either be uniform or random. It is only required that every entry is positive and that, for each y, the probabilities sum to one: Σ_x t(e_x | f_y) = 1.

LOOP. until t(e_x | f_y) converges:

    t(e_x | f_y) \leftarrow \frac{t(e_x | f_y)}{\lambda_y} \sum_{k,i,j} \frac{\delta(e_x, e_i^{(k)}) \, \delta(f_y, f_j^{(k)})}{\sum_{j'} t(e_i^{(k)} | f_{j'}^{(k)})}

where each λ_y is a normalization constant that makes sure that Σ_x t(e_x | f_y) = 1.

RETURN. t(e_x | f_y).

In the above formula, δ is the Kronecker delta function -- it equals 1 if the two arguments are equal, and 0 otherwise. The index notation is as follows:

  • k ranges over English-foreign sentence pairs in the corpus;
  • i ranges over words in English sentences;
  • j ranges over words in foreign-language sentences;
  • x ranges over the entire vocabulary of English words in the corpus;
  • y ranges over the entire vocabulary of foreign words in the corpus.
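
The update above can be turned into a short program. Below is a minimal sketch of IBM Model 1 EM training (the function name and the toy corpus are illustrative, not from the original article); it accumulates the fractional counts of the numerator and normalizes them per foreign word, which is exactly the role of λ_y:

    from collections import defaultdict

    def train_ibm_model1(corpus, iterations=10):
        """EM training for IBM Model 1.

        corpus: list of (english_words, foreign_words) pairs;
                each foreign sentence should start with a "NULL" token.
        Returns a dict t[(e_word, f_word)] -> t(e | f), normalized per foreign word.
        """
        e_vocab = {e for e_sent, _ in corpus for e in e_sent}
        t = defaultdict(lambda: 1.0 / len(e_vocab))          # uniform initialization

        for _ in range(iterations):
            counts = defaultdict(float)                      # expected counts c(e_x, f_y)
            totals = defaultdict(float)                      # per-f_y normalizers (lambda_y)

            # E-step: distribute each English word over the foreign words of its sentence
            for e_sent, f_sent in corpus:
                for e in e_sent:
                    denom = sum(t[(e, f)] for f in f_sent)
                    for f in f_sent:
                        frac = t[(e, f)] / denom
                        counts[(e, f)] += frac
                        totals[f] += frac

            # M-step: renormalize the expected counts into probabilities
            t = defaultdict(float,
                            {(e, f): c / totals[f] for (e, f), c in counts.items()})
        return t

    # Toy usage (illustrative data only):
    corpus = [
        (["the", "house"], ["NULL", "das", "Haus"]),
        (["the", "book"],  ["NULL", "das", "Buch"]),
        (["a", "book"],    ["NULL", "ein", "Buch"]),
    ]
    t = train_ibm_model1(corpus, iterations=20)
    print(t[("book", "Buch")])   # should receive most of the probability mass for "Buch"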

Limitations

There are several limitations to IBM Model 1.

  • No fluency: Given any sentence pair (e, f), any permutation of the English sentence is equally likely: p(e | f) = p(e' | f) for any permutation e' of the English sentence e.
  • No length preference: The probability of each translation length is equal: Σ_{e has length l} p(e | f) = 1/N for any l ∈ {1, 2, ..., N}.
  • Does not explicitly model fertility: some foreign words tend to produce a fixed number of English words. For example, in German-to-English translation, ja is usually omitted, and zum is usually translated to one of to the, for the, to a, or for a.

Model 2

Model 2 allows alignment to be conditional on sentence lengths. That is, we have a probability distribution p_a(j | i, l_e, l_f), meaning "the probability that English word i is aligned to foreign word j, when the English sentence is of length l_e and the foreign sentence is of length l_f".

The rest of Model 1 is unchanged. With that, we have

    p(e, a | f) = \frac{1}{N} \prod_{i=1}^{l_e} t(e_i | f_{a(i)}) \, p_a(a(i) | i, l_e, l_f)

The EM algorithm can still be solved in closed form, giving the following updates:

    t(e_x | f_y) \leftarrow \frac{1}{\lambda_y} \sum_{k,i,j} \frac{t(e_i^{(k)} | f_j^{(k)}) \, p_a(j | i, l_e, l_f) \, \delta(e_x, e_i^{(k)}) \, \delta(f_y, f_j^{(k)})}{\sum_{j'} t(e_i^{(k)} | f_{j'}^{(k)}) \, p_a(j' | i, l_e, l_f)}

    p_a(j | i, l_e, l_f) \leftarrow \frac{1}{\lambda_{i, l_e, l_f}} \sum_{k} \frac{t(e_i^{(k)} | f_j^{(k)}) \, p_a(j | i, l_e, l_f) \, \delta(l_e, l_e^{(k)}) \, \delta(l_f, l_f^{(k)})}{\sum_{j'} t(e_i^{(k)} | f_{j'}^{(k)}) \, p_a(j' | i, l_e, l_f)}

where the λ are still normalization factors. See section 4.4.1 of Koehn (2010) for a derivation and an algorithm.
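
For intuition, the quantity inside the sums is the Model 2 alignment posterior for a single sentence pair. A minimal sketch (illustrative names, hypothetical parameter tables) of computing it:

    def model2_alignment_posterior(e_sent, f_sent, t, p_a):
        """Posterior P(a(i) = j | e, f) for each English position i under Model 2.

        e_sent : English words e_1 .. e_{l_e}
        f_sent : foreign words with f_sent[0] = "NULL"
        t      : dict (english_word, foreign_word) -> translation probability
        p_a    : dict (j, i, l_e, l_f) -> alignment probability
        """
        l_e, l_f = len(e_sent), len(f_sent) - 1
        posteriors = []
        for i, e in enumerate(e_sent, start=1):
            scores = {j: t[(e, f_sent[j])] * p_a[(j, i, l_e, l_f)]
                      for j in range(len(f_sent))}
            z = sum(scores.values())
            posteriors.append({j: s / z for j, s in scores.items()})
        return posteriors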

Model 3

The fertility problem is addressed in IBM Model 3. The fertility is modeled using a probability distribution defined as:

    n(\phi | f)

For each foreign word f_j, this distribution indicates how many output words ϕ it usually translates to. This model deals with dropping input words because it allows ϕ = 0. But there is still an issue when adding words. For example, the English word do is often inserted when negating. This issue is handled by a special NULL token, whose fertility can also be modeled using a conditional distribution defined as:

    n(\phi_0 | NULL)

The number of inserted words depends on the sentence length. This is why the NULL token insertion is modeled as an additional step after the fertility step. It increases the IBM Model 3 translation process to four steps:

  • Fertility: choose how many output words each input word produces.
  • NULL insertion: insert NULL tokens to account for output words with no input counterpart.
  • Lexical translation: translate each input word (and each NULL) into an output word.
  • Distortion: place the output words in their final positions.

The last step is called distortion instead of alignment because it is possible to produce the same translation with the same alignment in different ways. For example, the same translation with the same alignment can be obtained as follows:

  • ja NULL nie pójdę tak do do domu
  • I do not go the to house
  • I do not go to the house

IBM Model 3 can be mathematically expressed as:

    P(S | E, A) = \prod_{i=1}^{I} \Phi_i! \, n(\Phi_i | e_i) \cdot \prod_{j=1}^{J} t(f_j | e_{a_j}) \cdot \prod_{j: a_j \neq 0} d(j | a_j, I, J) \cdot \binom{J - \Phi_0}{\Phi_0} \, p_0^{\Phi_0} \, p_1^{J}

where Φ_i represents the fertility of e_i, each source word is assigned a fertility distribution n, and I and J refer to the absolute lengths of the target and source sentences, respectively.

See section 4.4.2 of Koehn (2010) for a derivation and an algorithm.
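
As an illustration only, the formula can be scored term by term for one candidate alignment. The sketch below assumes hypothetical dictionaries n, t, and d for the fertility, translation, and distortion tables:

    from math import comb, factorial

    def model3_score(f_sent, e_sent, align, n, t, d, p0, p1):
        """Evaluate P(S | E, A) for IBM Model 3, following the formula above.

        f_sent : source words f_1 .. f_J
        e_sent : target words e_1 .. e_I
        align  : align[j - 1] = a_j, the target position generating source word j (0 = NULL)
        n, t, d: fertility, translation, and distortion tables (dicts)
        p0, p1 : NULL-insertion parameters
        """
        I, J = len(e_sent), len(f_sent)
        fert = [sum(1 for a in align if a == i) for i in range(I + 1)]   # fert[0] is the NULL fertility
        score = 1.0
        for i in range(1, I + 1):                                        # fertility terms
            score *= factorial(fert[i]) * n[(fert[i], e_sent[i - 1])]
        for j in range(1, J + 1):                                        # lexical translation terms
            e = "NULL" if align[j - 1] == 0 else e_sent[align[j - 1] - 1]
            score *= t[(f_sent[j - 1], e)]
        for j in range(1, J + 1):                                        # distortion terms (non-NULL words)
            if align[j - 1] != 0:
                score *= d[(j, align[j - 1], I, J)]
        return score * comb(J - fert[0], fert[0]) * p0 ** fert[0] * p1 ** J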

Model 4

In IBM Model 4, each word is dependent on the previously aligned word and on the word classes of the surrounding words. Some words tend to get reordered during translation more than others (e.g. adjective–noun inversion when translating Polish to English): an adjective often moves in front of the noun it follows in the source language. The word classes introduced in Model 4 address this problem by conditioning the distortion probability distributions on these classes. The result is a lexicalized model. Such a distribution can be defined as follows:

For the initial word in the cept: d_1(j - \odot_{[i-1]} | A(f_{[i-1]}), B(e_j))

For additional words: d_{>1}(j - \pi_{i,k-1} | B(e_j))

where the functions A(f) and B(e) map words to their word classes, and e_j and f_{[i-1]} are the output word and the foreign word of the previous cept, whose classes condition the distortion probability distributions. A cept is formed by aligning each input word f_i to at least one output word.
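
A minimal sketch (with a hypothetical table layout) of how these class-conditioned distortion probabilities could be looked up when scoring the words of one cept:

    def cept_distortion_score(positions, prev_center, prev_f_word, e_words,
                              word_class, d_head, d_rest):
        """Distortion score of one cept under a Model 4-style parameterization.

        positions   : output positions of the cept's words, in increasing order
        prev_center : center position of the previous cept
        prev_f_word : foreign word heading the previous cept
        e_words     : output words placed at `positions`
        word_class  : dict word -> class id (playing the role of A(.) and B(.))
        d_head      : dict (jump, class of prev_f_word, class of e) -> prob, first word
        d_rest      : dict (jump, class of e) -> prob, remaining words
        """
        score = d_head[(positions[0] - prev_center,
                        word_class[prev_f_word], word_class[e_words[0]])]
        for k in range(1, len(positions)):
            score *= d_rest[(positions[k] - positions[k - 1], word_class[e_words[k]])]
        return score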

Both Model 3 and Model 4 ignore whether a position was already chosen, and they also reserve probability mass for positions outside the sentence boundaries. This is the reason why the probabilities of all correct alignments do not sum up to unity in these two models (they are deficient models).

Model 5

IBM Model 5 reformulates IBM Model 4 by enhancing the alignment model with more training parameters in order to overcome the model deficiency. During translation in Model 3 and Model 4 there are no heuristics that would prohibit the placement of an output word in a position already taken. In Model 5 it is important to place words only in free positions. This is done by tracking the number of free positions and allowing placement only in such positions. The distortion model is similar to that of IBM Model 4, but it is based on free positions. If v_j denotes the number of free positions in the output, the IBM Model 5 distortion probabilities are defined as:

For the initial word in the cept: d_1(v_j | B(e_j), v_{\odot_{i-1}}, v_{max})

For additional words: d_{>1}(v_j - v_{\pi_{i,k-1}} | B(e_j), v_{max'})
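
For intuition, a minimal sketch (illustrative only) of the vacancy bookkeeping that Model 5 relies on: v_j is the number of positions up to j that are still free.

    def vacancies(taken, j):
        """Number of free output positions among 1..j, given the set of already-taken positions."""
        return sum(1 for pos in range(1, j + 1) if pos not in taken)

    # Example: positions 2 and 4 are already occupied in a 6-word output.
    taken = {2, 4}
    print([vacancies(taken, j) for j in range(1, 7)])   # [1, 1, 2, 2, 3, 4]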

Model 6

The alignment models that use first-order dependencies, like the HMM or IBM Models 4 and 5, produce better results than the other alignment methods. The main idea of the HMM is to predict the distance between subsequent source language positions. On the other hand, IBM Model 4 tries to predict the distance between subsequent target language positions. Since better alignment quality was expected when using both types of dependencies, the HMM and Model 4 were combined in a log-linear manner in Model 6 as follows:

    p_6(f, a | e) = \frac{p_4(f, a | e)^{\alpha} \, p_{HMM}(f, a | e)}{\sum_{a', f'} p_4(f', a' | e)^{\alpha} \, p_{HMM}(f', a' | e)}

where the interpolation parameter α is used to weight Model 4 relative to the hidden Markov model. A log-linear combination of several models p_k(f, a | e), with k = 1, 2, ..., K, can be defined as:

    p_6(f, a | e) = \frac{\prod_{k=1}^{K} p_k(f, a | e)^{\alpha_k}}{\sum_{a', f'} \prod_{k=1}^{K} p_k(f', a' | e)^{\alpha_k}}

The log-linear combination is used instead of a linear combination because the p_k(f, a | e) values typically differ by orders of magnitude between the HMM and IBM Model 4.
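
A minimal sketch (illustrative only) of the unnormalized combination in log space, which is the natural way to handle component probabilities that differ by orders of magnitude:

    import math

    def log_linear_score(log_probs, weights):
        """Unnormalized log-linear combination: sum_k alpha_k * log p_k(f, a | e)."""
        return sum(a * lp for a, lp in zip(weights, log_probs))

    # Example: Model 4 and HMM scores of very different magnitude, with alpha = 0.5 for Model 4.
    log_p4, log_hmm = math.log(1e-12), math.log(1e-4)
    print(log_linear_score([log_p4, log_hmm], [0.5, 1.0]))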

References

  1. "IBM Models". SMT Research Survey Wiki. 11 September 2015. Retrieved 26 October 2015.
  2. Yarin Gal; Phil Blunsom (12 June 2013). "A Systematic Bayesian Treatment of the IBM Alignment Models" (PDF). University of Cambridge. Archived from the original (PDF) on 4 Mar 2016. Retrieved 26 October 2015.
  3. Koehn, Philipp (2010). "4. Word-Based Models". Statistical Machine Translation. Cambridge University Press. ISBN 978-0-521-87415-1.
  4. "CS288, Spring 2020, Lectur 05: Statistical Machine Translation" (PDF). Archived (PDF) from the original on 24 Oct 2020.
  5. Wołk K., Marasek K. (2014). Polish-English Speech Statistical Machine Translation Systems for the IWSLT 2014. Proceedings of the 11th International Workshop on Spoken Language Translation, Lake Tahoe, USA. arXiv:1509.08874.
  6. Fernández, Pablo Malvar (2008). Improving Word-to-word Alignments Using Morphological Information. PhD Thesis. San Diego State University.
  7. Schoenemann, Thomas (2010). Computing optimal alignments for the IBM-3 translation model. Proceedings of the Fourteenth Conference on Computational Natural Language Learning. Association for Computational Linguistics. pp. 98–106.
  8. Knight, Kevin (1999). A Statistical MT Tutorial Workbook. Manuscript prepared for the 1999 JHU Summer Workshop.
  9. Brown, Peter F. (1993). "The mathematics of statistical machine translation: Parameter estimation". Computational Linguistics (19): 263–311.
  10. Vulić I. (2010). "Term Alignment. State of the Art Overview" (PDF). Katholieke Universiteit Leuven. Retrieved 26 October 2015.
  11. Wołk, K. (2015). "Noisy-Parallel and Comparable Corpora Filtering Methodology for the Extraction of Bi-Lingual Equivalent Data at Sentence Level". Computer Science. 16 (2): 169–184. arXiv:1510.04500. Bibcode:2015arXiv151004500W. doi:10.7494/csci.2015.16.2.169. S2CID 12860633.