Vector space model


Vector space model or term vector model is an algebraic model for representing text documents (or more generally, items) as vectors such that the distance between vectors represents the relevance between the documents. It is used in information filtering, information retrieval, indexing and relevancy rankings. Its first use was in the SMART Information Retrieval System.

Definitions

In this section we consider a particular vector space model based on the bag-of-words representation. Documents and queries are represented as vectors.

d_j = (w_{1,j}, w_{2,j}, \dotsc, w_{n,j})
q = (w_{1,q}, w_{2,q}, \dotsc, w_{n,q})

Each dimension corresponds to a separate term. If a term occurs in the document, its value in the vector is non-zero. Several different ways of computing these values, also known as (term) weights, have been developed. One of the best known schemes is tf-idf weighting (see the example below).

The definition of term depends on the application. Typically terms are single words, keywords, or longer phrases. If words are chosen to be the terms, the dimensionality of the vector is the number of words in the vocabulary (the number of distinct words occurring in the corpus).

Vector operations can be used to compare documents with queries.
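
As an illustration, the following Python sketch builds term-count vectors over a toy corpus. The corpus, the whitespace tokenizer, and the raw-count weighting are illustrative assumptions, not part of the model's definition:

```python
from collections import Counter

# Toy corpus (illustrative; any collection of documents would do).
corpus = [
    "the quick brown fox",
    "the lazy dog",
    "the quick dog jumps",
]

# One vector dimension per distinct word occurring in the corpus.
vocabulary = sorted({word for doc in corpus for word in doc.split()})

def to_vector(text: str) -> list[int]:
    """Map a document (or query) to a term-count vector over the vocabulary."""
    counts = Counter(text.split())
    return [counts[term] for term in vocabulary]

doc_vectors = [to_vector(doc) for doc in corpus]
query_vector = to_vector("quick dog")
```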

Applications

Candidate documents from the corpus can be retrieved and ranked using a variety of methods. Relevance rankings of documents in a keyword search can be calculated, using the assumptions of document similarities theory, by comparing the angle between each document vector and the original query vector, where the query is represented as a vector of the same dimension as the document vectors.

In practice, it is easier to calculate the cosine of the angle between the vectors, instead of the angle itself:

\cos\theta = \frac{\mathbf{d_2} \cdot \mathbf{q}}{\left\|\mathbf{d_2}\right\| \left\|\mathbf{q}\right\|}

where \mathbf{d_2} \cdot \mathbf{q} is the dot product of a document vector d_2 and the query vector q, \left\|\mathbf{d_2}\right\| is the norm of vector d_2, and \left\|\mathbf{q}\right\| is the norm of vector q. The norm of a vector is calculated as:

\left\|\mathbf{q}\right\| = \sqrt{\sum_{i=1}^{n} q_i^2}

Using the cosine, the similarity between document d_j and query q can be calculated as:

\cos(d_j, q) = \frac{\mathbf{d_j} \cdot \mathbf{q}}{\left\|\mathbf{d_j}\right\| \left\|\mathbf{q}\right\|} = \frac{\sum_{i=1}^{N} d_{i,j}\, q_i}{\sqrt{\sum_{i=1}^{N} d_{i,j}^2}\, \sqrt{\sum_{i=1}^{N} q_i^2}}

As all vectors under consideration by this model are element-wise nonnegative, a cosine value of zero means that the query and document vectors are orthogonal and have no match (i.e., no query term occurs in the document being considered). See cosine similarity for further information.
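
A minimal Python sketch of this cosine computation, using made-up toy vectors, might look as follows:

```python
import math

def cosine(d: list[float], q: list[float]) -> float:
    """cos(d, q) = (d . q) / (||d|| ||q||); returns 0.0 if either vector is all zeros."""
    dot = sum(di * qi for di, qi in zip(d, q))
    norm_d = math.sqrt(sum(di * di for di in d))
    norm_q = math.sqrt(sum(qi * qi for qi in q))
    return dot / (norm_d * norm_q) if norm_d and norm_q else 0.0

# Toy term-count vectors over a three-term vocabulary (illustrative values).
d2 = [1.0, 2.0, 0.0]
q = [1.0, 0.0, 1.0]
print(cosine(d2, q))  # ~0.316: the vectors share one term
```

In a retrieval setting, this function would be applied to every candidate document vector and the results sorted in decreasing order to produce the ranking.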

Term frequency-inverse document frequency weights

In the classic vector space model proposed by Salton, Wong and Yang, the term-specific weights in the document vectors are products of local and global parameters. The model is known as the term frequency-inverse document frequency model. The weight vector for document d is \mathbf{v}_d = [w_{1,d}, w_{2,d}, \dotsc, w_{N,d}]^T, where

w_{t,d} = \mathrm{tf}_{t,d} \cdot \log \frac{|D|}{|\{d' \in D \mid t \in d'\}|}

and

  • \mathrm{tf}_{t,d} is the term frequency of term t in document d (a local parameter)
  • \log \frac{|D|}{|\{d' \in D \mid t \in d'\}|} is the inverse document frequency (a global parameter), where |D| is the total number of documents in the document set and |\{d' \in D \mid t \in d'\}| is the number of documents containing the term t.
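
A minimal Python sketch of this weighting scheme, assuming raw counts for tf and the natural logarithm (both common but not the only choices), might look as follows:

```python
import math
from collections import Counter

def tfidf_vectors(corpus: list[str]) -> list[list[float]]:
    """Compute w_{t,d} = tf_{t,d} * log(|D| / df_t) for every term and document."""
    docs = [doc.split() for doc in corpus]
    vocabulary = sorted({w for doc in docs for w in doc})
    n_docs = len(docs)  # |D|: total number of documents
    # df_t: number of documents containing term t (denominator of the global parameter).
    df = {t: sum(1 for doc in docs if t in doc) for t in vocabulary}
    vectors = []
    for doc in docs:
        tf = Counter(doc)  # tf_{t,d}: term frequency, the local parameter
        vectors.append([tf[t] * math.log(n_docs / df[t]) for t in vocabulary])
    return vectors

weights = tfidf_vectors(["a b a", "a c", "b c c"])
```

Note that a term appearing in every document gets weight log(|D|/|D|) = 0, so ubiquitous terms contribute nothing to the ranking.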

Advantages

The vector space model has the following advantages over the Standard Boolean model:

  1. Allows ranking documents according to their possible relevance
  2. Allows retrieving items with a partial term overlap

Most of these advantages are a consequence of the difference in the density of the document collection representation between Boolean and term frequency-inverse document frequency approaches. When using Boolean weights, any document lies on a vertex of an n-dimensional hypercube. Therefore, there are only 2^n possible document representations, and the maximum Euclidean distance between any pair is \sqrt{n} (for example, with n = 3 terms there are only 2^3 = 8 possible Boolean vectors, at most \sqrt{3} apart). As documents are added to the document collection, the region defined by the hypercube's vertices becomes more populated and hence denser. Unlike the Boolean case, when a document is added using term frequency-inverse document frequency weights, the inverse document frequencies of the terms in the new document decrease while those of the remaining terms increase. On average, as documents are added, the region where documents lie expands, regulating the density of the entire collection representation. This behavior models the original motivation of Salton and his colleagues: a document collection represented in a low-density region could yield better retrieval results.

Limitations

The vector space model has the following limitations:

  1. Query terms are assumed to be independent, so phrases might not be represented well in the ranking
  2. Semantic sensitivity; documents with similar context but different term vocabulary won't be associated

Many of these difficulties can, however, be overcome by the integration of various tools, including mathematical techniques such as singular value decomposition and lexical databases such as WordNet.
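
As an illustration of the singular value decomposition approach (the idea behind latent semantic analysis), the following NumPy sketch projects documents into a low-rank latent space where documents with similar term co-occurrence patterns can be associated even when they share few exact terms. The matrix values and the rank k are made-up assumptions:

```python
import numpy as np

def lsa_doc_vectors(term_doc: np.ndarray, k: int) -> np.ndarray:
    """Rank-k latent document vectors from a (terms x documents) matrix."""
    u, s, vt = np.linalg.svd(term_doc, full_matrices=False)
    # Keep the k largest singular values; each document becomes a k-dimensional row.
    return (np.diag(s[:k]) @ vt[:k, :]).T

# Toy 4-term x 3-document count matrix (illustrative values).
x = np.array([[2.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 1.0, 0.0]])
docs_2d = lsa_doc_vectors(x, k=2)  # cosine similarity can now be applied in the latent space
```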

Models based on and extending the vector space model

Models based on and extending the vector space model include:

  • Generalized vector space model
  • Latent semantic analysis
  • Rocchio classification
  • Random indexing

Software that implements the vector space model

The following software packages may be of interest to those wishing to experiment with vector models and implement search services based upon them.

Free open source software

  • Apache Lucene
  • Gensim
  • Weka

References

  1. Berry, Michael W.; Drmac, Zlatko; Jessup, Elizabeth R. (January 1999). "Matrices, Vector Spaces, and Information Retrieval". SIAM Review. 41 (2): 335–362. doi:10.1137/s0036144598347035.
  2. Büttcher, Stefan; Clarke, Charles L. A.; Cormack, Gordon V. (2016). Information Retrieval: Implementing and Evaluating Search Engines (First MIT Press paperback ed.). Cambridge, Massachusetts: The MIT Press. ISBN 978-0-262-52887-0.
  3. Salton, G.; Wong, A.; Yang, C. S. (November 1975). "A vector space model for automatic indexing". Communications of the ACM. 18 (11): 613–620.