Latent semantic analysis

Latent semantic analysis (LSA) is a technique in natural language processing, in particular in vectorial semantics, of analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms.

LSA was patented in 1988 by Scott Deerwester, Susan Dumais, George Furnas, Richard Harshman, Thomas Landauer, Karen Lochbaum and Lynn Streeter. In the context of its application to information retrieval, it is sometimes called latent semantic indexing (LSI).

Occurrence matrix

LSA uses a term-document matrix which describes the occurrences of terms in documents; it is a sparse matrix whose rows correspond to terms (typically stemmed words that appear in the documents) and whose columns correspond to documents. A typical example of the weighting of the elements of the matrix is tf-idf (term frequency–inverse document frequency): the weight of an element is proportional to the number of times the term appears in the document, with rare terms upweighted to reflect their relative importance.
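
As an illustration, the following minimal sketch builds such a tf-idf-weighted term-document matrix with scikit-learn (the tiny corpus is invented for the example, and scikit-learn is assumed to be available); the vectorizer produces a document-term matrix, so it is transposed to match the term-by-document orientation used here.

    from sklearn.feature_extraction.text import TfidfVectorizer

    # A toy corpus; each string is one document.
    docs = [
        "the car is driven on the road",
        "the truck is driven on the highway",
        "a flower grows in the garden",
    ]

    vectorizer = TfidfVectorizer()            # tf-idf weighting
    dtm = vectorizer.fit_transform(docs)      # sparse document-term matrix (documents x terms)
    X = dtm.T                                 # transpose to a term-document matrix (terms x documents)

    print(X.shape)                            # (number of terms, number of documents)
    print(vectorizer.get_feature_names_out()) # the terms corresponding to the rows of X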

This matrix is also common to standard semantic models, though it is not necessarily explicitly expressed as a matrix, since the mathematical properties of matrices are not always used.

LSA transforms the occurrence matrix into a relation between the terms and some concepts, and a relation between those concepts and the documents. Thus the terms and documents are now indirectly related through the concepts.

Applications

The new concept space can typically be used to compare documents in the concept space (data clustering, document classification), compare terms, and, given a query of terms, translate it into the concept space and find matching documents (information retrieval).

Synonymy and polysemy are fundamental problems in natural language processing:

  • Synonymy is the phenomenon where different words describe the same idea. Thus, a query in a search engine may fail to retrieve a relevant document that does not contain the words which appeared in the query.
  • Polysemy is the phenomenon where the same word has multiple meanings. So a search may retrieve irrelevant documents containing the desired words in the wrong meaning. For example, a botanist and a computer scientist looking for the word "tree" probably desire different sets of documents.

Rank lowering

After the construction of the occurrence matrix, LSA finds a low-rank approximation to the term-document matrix. There could be various reasons for this approximation:

  • The original term-document matrix is presumed too large for the computing resources; in this case, the approximated low rank matrix is interpreted as an approximation (a "least and necessary evil").
  • The original term-document matrix is presumed noisy: for example, anecdotal instances of terms are to be eliminated. From this point of view, the approximated matrix is interpreted as a de-noisified matrix (a better matrix than the original).
  • The original term-document matrix is presumed overly sparse relative to the "true" term-document matrix. That is, the original matrix lists only the words actually in each document, whereas we might be interested in all words related to each document, generally a much larger set due to synonymy.

The consequence of the rank lowering is that some dimensions are combined and depend on more than one term:

{(car), (truck), (flower)} --> {(1.3452 * car + 0.2828 * truck), (flower)}

This mitigates synonymy, as the rank lowering is expected to merge the dimensions associated with terms that have similar meanings. It also mitigates polysemy, since components of polysemous words that point in the "right" direction are added to the components of words that share a similar meaning. Conversely, components that point in other directions tend to either simply cancel out, or, at worst, to be smaller than components in the directions corresponding to the intended sense.
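
To see this merging numerically, here is a hedged sketch (with a hypothetical toy count matrix, and jumping ahead to the SVD machinery described under Derivation below) in which the rows for "car" and "truck" end up concentrated in the same concept dimension.

    import numpy as np

    # Hypothetical toy term-document counts: rows = (car, truck, flower), columns = 4 documents.
    # "car" and "truck" co-occur in the same kind of documents; "flower" does not.
    X = np.array([
        [2.0, 3.0, 0.0, 0.0],   # car
        [1.0, 2.0, 0.0, 0.0],   # truck
        [0.0, 0.0, 3.0, 1.0],   # flower
    ])

    U, s, Vt = np.linalg.svd(X, full_matrices=False)

    k = 2                       # keep the 2 largest singular values
    Uk = U[:, :k]               # each row: a term's loadings on the k concept dimensions
    print(np.round(Uk, 3))
    # The "car" and "truck" rows have their weight concentrated in the same column
    # (a shared "vehicle-like" concept), while "flower" is concentrated in the other.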

Derivation

Let $X$ be a matrix where element $(i,j)$ describes the occurrence of term $i$ in document $j$ (this can be, for example, the frequency). $X$ will look like this:

$$
\begin{matrix}
 & \mathbf{d}_j \\
 & \downarrow \\
\mathbf{t}_i^T \rightarrow &
\begin{bmatrix}
x_{1,1} & \dots & x_{1,n} \\
\vdots & \ddots & \vdots \\
x_{m,1} & \dots & x_{m,n}
\end{bmatrix}
\end{matrix}
$$

Now a row in this matrix will be a vector corresponding to a term, giving its relation to each document:

$$\mathbf{t}_i^T = \begin{bmatrix} x_{i,1} & \dots & x_{i,n} \end{bmatrix}$$

Likewise, a column in this matrix will be a vector corresponding to a document, giving its relation to each term:

$$\mathbf{d}_j = \begin{bmatrix} x_{1,j} \\ \vdots \\ x_{m,j} \end{bmatrix}$$

Now the dot product $\mathbf{t}_i^T \mathbf{t}_p$ between two term vectors gives the correlation between the terms over the documents. The matrix product $X X^T$ contains all these dot products. Element $(i,p)$ (which is equal to element $(p,i)$) contains the dot product $\mathbf{t}_i^T \mathbf{t}_p$ ($= \mathbf{t}_p^T \mathbf{t}_i$). Likewise, the matrix $X^T X$ contains the dot products between all the document vectors, giving their correlation over the terms: $\mathbf{d}_j^T \mathbf{d}_q = \mathbf{d}_q^T \mathbf{d}_j$.
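
As a quick numerical illustration (the toy matrix is assumed, not from the article), the two Gram matrices of term and document correlations can be formed directly:

    import numpy as np

    # Toy term-document matrix: 3 terms x 4 documents (hypothetical counts).
    X = np.array([
        [2.0, 3.0, 0.0, 0.0],
        [1.0, 2.0, 0.0, 0.0],
        [0.0, 0.0, 3.0, 1.0],
    ])

    term_term = X @ X.T    # element (i, p): dot product of term vectors t_i and t_p over the documents
    doc_doc   = X.T @ X    # element (j, q): dot product of document vectors d_j and d_q over the terms

    print(term_term)       # symmetric 3x3 matrix of term correlations
    print(doc_doc)         # symmetric 4x4 matrix of document correlations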

Now assume that there exists a decomposition of $X$ such that $U$ and $V$ are orthogonal matrices and $\Sigma$ is a diagonal matrix. This is called a singular value decomposition (SVD):

$$X = U \Sigma V^T$$

The matrix products giving us the term and document correlations then become

$$
\begin{aligned}
X X^T &= (U \Sigma V^T)(U \Sigma V^T)^T = (U \Sigma V^T)(V \Sigma^T U^T) = U \Sigma (V^T V) \Sigma^T U^T = U \Sigma \Sigma^T U^T \\
X^T X &= (U \Sigma V^T)^T (U \Sigma V^T) = (V \Sigma^T U^T)(U \Sigma V^T) = V \Sigma^T (U^T U) \Sigma V^T = V \Sigma^T \Sigma V^T
\end{aligned}
$$
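
These identities are easy to check numerically; the sketch below (reusing the hypothetical toy matrix) computes the SVD with NumPy and verifies both products as well as the decomposition itself.

    import numpy as np

    X = np.array([
        [2.0, 3.0, 0.0, 0.0],
        [1.0, 2.0, 0.0, 0.0],
        [0.0, 0.0, 3.0, 1.0],
    ])

    U, s, Vt = np.linalg.svd(X, full_matrices=False)   # X = U @ diag(s) @ Vt
    Sigma = np.diag(s)

    print(np.allclose(X, U @ Sigma @ Vt))                      # True: X = U Sigma V^T
    print(np.allclose(X @ X.T, U @ Sigma @ Sigma.T @ U.T))     # True: X X^T = U Sigma Sigma^T U^T
    print(np.allclose(X.T @ X, Vt.T @ Sigma.T @ Sigma @ Vt))   # True: X^T X = V Sigma^T Sigma V^T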

Since $\Sigma \Sigma^T$ and $\Sigma^T \Sigma$ are diagonal we see that $U$ must contain the eigenvectors of $X X^T$, while $V$ must contain the eigenvectors of $X^T X$. Both products have the same non-zero eigenvalues, given by the non-zero entries of $\Sigma \Sigma^T$, or equally, by the non-zero entries of $\Sigma^T \Sigma$. Now the decomposition looks like this:

$$
\begin{matrix}
 & X & & & U & & \Sigma & & V^T \\
 & (\mathbf{d}_j) & & & & & & & (\hat{\mathbf{d}}_j) \\
 & \downarrow & & & & & & & \downarrow \\
(\mathbf{t}_i^T) \rightarrow &
\begin{bmatrix} x_{1,1} & \dots & x_{1,n} \\ \vdots & \ddots & \vdots \\ x_{m,1} & \dots & x_{m,n} \end{bmatrix}
& = & (\hat{\mathbf{t}}_i^T) \rightarrow &
\begin{bmatrix}
\begin{bmatrix} \, \\ \mathbf{u}_1 \\ \, \end{bmatrix} \dots
\begin{bmatrix} \, \\ \mathbf{u}_l \\ \, \end{bmatrix}
\end{bmatrix}
& \cdot &
\begin{bmatrix} \sigma_1 & \dots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \dots & \sigma_l \end{bmatrix}
& \cdot &
\begin{bmatrix}
\begin{bmatrix} & \mathbf{v}_1 & \end{bmatrix} \\ \vdots \\ \begin{bmatrix} & \mathbf{v}_l & \end{bmatrix}
\end{bmatrix}
\end{matrix}
$$

The values $\sigma_1, \dots, \sigma_l$ are called the singular values, and $\mathbf{u}_1, \dots, \mathbf{u}_l$ and $\mathbf{v}_1, \dots, \mathbf{v}_l$ the left and right singular vectors. Notice how the only part of $U$ that contributes to $\mathbf{t}_i$ is the $i$-th row. Let this row vector be called $\hat{\mathbf{t}}_i$. Likewise, the only part of $V^T$ that contributes to $\mathbf{d}_j$ is the $j$-th column, $\hat{\mathbf{d}}_j$. These are not the eigenvectors, but depend on all the eigenvectors.

It turns out that when you select the $k$ largest singular values, and their corresponding singular vectors from $U$ and $V$, you get the rank-$k$ approximation to $X$ with the smallest error (in the Frobenius norm). The remarkable property of this approximation is that not only does it have minimal error, but it translates the term and document vectors into a concept space. The vector $\hat{\mathbf{t}}_i$ then has $k$ entries, each giving the occurrence of term $i$ in one of the $k$ concepts. Likewise, the vector $\hat{\mathbf{d}}_j$ gives the relation between document $j$ and each concept. We write this approximation as

$$X_k = U_k \Sigma_k V_k^T$$
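
A minimal sketch of forming this rank-$k$ approximation with NumPy (toy data assumed as before); the rows of $U_k$ and $V_k$ serve as the term and document vectors in the concept space:

    import numpy as np

    X = np.array([
        [2.0, 3.0, 0.0, 0.0],
        [1.0, 2.0, 0.0, 0.0],
        [0.0, 0.0, 3.0, 1.0],
    ])

    U, s, Vt = np.linalg.svd(X, full_matrices=False)

    k = 2
    Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]    # keep the k largest singular values/vectors
    Xk = Uk @ np.diag(sk) @ Vtk                 # X_k = U_k Sigma_k V_k^T

    print(np.linalg.norm(X - Xk, 'fro'))        # Frobenius-norm error of the approximation
    term_hat = Uk                               # row i: term i in the k-dimensional concept space
    doc_hat  = Vtk.T                            # row j: document j in the concept space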

You can now do the following:

  • See how related documents $j$ and $q$ are in the concept space by comparing the vectors $\hat{\mathbf{d}}_j$ and $\hat{\mathbf{d}}_q$ (typically by cosine similarity). This gives you a clustering of the documents.
  • Compare terms $i$ and $p$ by comparing the vectors $\hat{\mathbf{t}}_i$ and $\hat{\mathbf{t}}_p$, giving you a clustering of the terms in the concept space.
  • Given a query, view this as a mini document, and compare it to your documents in the concept space.

To do the latter, you must first translate your query into the concept space. It is then intuitive that you must use the same transformation that you use on your documents:

$$\mathbf{d}_j = U_k \Sigma_k \hat{\mathbf{d}}_j$$

$$\hat{\mathbf{d}}_j = \Sigma_k^{-1} U_k^T \mathbf{d}_j$$

This means that if you have a query vector $\mathbf{q}$, you must do the translation $\hat{\mathbf{q}} = \Sigma_k^{-1} U_k^T \mathbf{q}$ before you compare it with the document vectors in the concept space. You can do the same for pseudo term vectors:

$$\mathbf{t}_i^T = \hat{\mathbf{t}}_i^T \Sigma_k V_k^T$$

$$\hat{\mathbf{t}}_i^T = \mathbf{t}_i^T V_k^{-T} \Sigma_k^{-1} = \mathbf{t}_i^T V_k \Sigma_k^{-1}$$

$$\hat{\mathbf{t}}_i = \Sigma_k^{-1} V_k^T \mathbf{t}_i$$
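
Putting the query fold-in to work, the following hedged sketch (hypothetical toy corpus and query) maps a query vector into the concept space and ranks the documents by cosine similarity:

    import numpy as np

    # Toy term-document matrix; rows = terms ("car", "truck", "flower"), columns = 4 documents.
    X = np.array([
        [2.0, 3.0, 0.0, 0.0],
        [1.0, 2.0, 0.0, 0.0],
        [0.0, 0.0, 3.0, 1.0],
    ])

    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = 2
    Uk, Sk, Vtk = U[:, :k], np.diag(s[:k]), Vt[:k, :]

    doc_hat = Vtk.T                              # row j: document j in the concept space

    q = np.array([1.0, 1.0, 0.0])                # query mentioning "car" and "truck"
    q_hat = np.linalg.inv(Sk) @ Uk.T @ q         # q_hat = Sigma_k^{-1} U_k^T q

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    scores = [cosine(q_hat, d) for d in doc_hat] # compare the query with each document
    print(np.round(scores, 3))                   # the first two documents score highest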

Implementation

The SVD is typically computed using large matrix methods (for example, Lanczos methods) but may also be computed incrementally and with greatly reduced resources via a neural network-like approach, which does not require the large, full-rank matrix to be held in memory.
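
For large sparse term-document matrices one would normally use a sparse, truncated solver rather than a dense SVD; a hedged sketch using SciPy's Lanczos-based svds is shown below (the random sparse matrix is only a stand-in for a real corpus).

    import numpy as np
    from scipy.sparse import random as sparse_random
    from scipy.sparse.linalg import svds

    # Stand-in for a large sparse term-document matrix (10,000 terms x 2,000 documents).
    X = sparse_random(10000, 2000, density=0.001, format='csr', random_state=0)

    k = 100                          # number of concepts to keep
    U, s, Vt = svds(X, k=k)          # Lanczos-based truncated SVD; never forms a dense matrix

    # svds returns the singular values in ascending order; reverse to the usual convention.
    order = np.argsort(s)[::-1]
    U, s, Vt = U[:, order], s[order], Vt[order, :]

    print(U.shape, s.shape, Vt.shape)   # (10000, 100) (100,) (100, 2000)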

Limitations

LSA has some drawbacks:

  • The resulting dimensions might be difficult to interpret. For instance, in
{(car), (truck), (flower)} --> {(1.3452 * car + 0.2828 * truck), (flower)}
the (1.3452 * car + 0.2828 * truck) component could be interpreted as "vehicle". However, it is very likely that cases close to
{(car), (bottle), (flower)} --> {(1.3452 * car + 0.2828 * bottle), (flower)}
will occur. This leads to results which can be justified on the mathematical level, but have no interpretable meaning in natural language.
