We use word embeddings to enhance the accuracy of query language models in the ad-hoc retrieval task. To this end, we propose to use word embeddings to incorporate and weight terms that do not occur in the query but are semantically related to the query terms.
Word2vec and GloVe are examples of successful implementations of word embeddings that respectively use neural networks and matrix factorization to learn embedding vectors.
We address the vocabulary mismatch problem, i.e., the case where different vocabulary terms refer to the same concept. This is a fundamental IR problem, since users often describe a concept in their queries with different words than those the authors of relevant documents use to describe the same concept.
In addition to the terms that appear in the query, we incorporate and weight the words that do not occur in the query but are semantically similar to the query terms. To do so, we propose two query expansion models with different simplifying assumptions.
A well-known and effective technique in ad-hoc information retrieval to address the vocabulary mismatch problem is pseudo-relevance feedback (PRF). It assumes that a small set of top-retrieved documents is relevant to the query, and thus a number of relevant terms can be selected from this set of feedback documents (also called pseudo-relevant documents) to be added to the query model.
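The selection step of PRF can be sketched as follows. This is a minimal illustration, not a specific PRF model from the literature: it scores candidate terms simply by their frequency in the feedback documents, whereas real PRF methods (e.g., relevance models) also weight terms by the retrieval scores of the documents they occur in. The function name and the toy documents are hypothetical.

```python
from collections import Counter

def prf_expansion_terms(feedback_docs, query_terms, k=10):
    """Pick k expansion terms from pseudo-relevant documents.

    Scores each candidate term by its total frequency in the feedback
    set, skipping terms that already appear in the query.
    """
    counts = Counter()
    for doc in feedback_docs:
        counts.update(t for t in doc.split() if t not in query_terms)
    return [term for term, _ in counts.most_common(k)]

docs = ["neural network language model",
        "neural embedding vectors for retrieval",
        "retrieval with neural language models"]
print(prf_expansion_terms(docs, {"retrieval"}, k=2))  # → ['neural', 'language']
```

In practice the selected terms are not simply appended to the query; they are interpolated into the query language model with weights reflecting their estimated relevance.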
Query expansion is the process of adding relevant terms to a query to improve retrieval effectiveness. There are a number of query expansion methods based on linguistic resources, such as WordNet, but they have not substantially improved retrieval performance, as they are based on global analysis, which often relies on external resources or separate document collections. Local analysis, on the other hand, expands queries using documents related to them, such as the top-retrieved documents. Query expansion via pseudo-relevance feedback is a common technique used to improve retrieval effectiveness in many retrieval models.
We build on the idea behind word embedding techniques, whose purpose is to capture the semantic similarity between vocabulary terms. We propose to apply the sigmoid function to well-known similarity metrics to obtain more discriminative similarity values between terms.
Word embedding techniques learn a low-dimensional vector (compared to the vocabulary size) for each vocabulary term, in which the similarity between word vectors reflects the semantic as well as the syntactic similarities between the corresponding words. This is why word embeddings are also called distributed semantic representations. Note that word embedding methods are categorized as unsupervised learning algorithms, since they only need a large amount of raw textual data in their training phase. A popular way to compute these word embedding vectors is neural network-based language models.
Mikolov et al. introduced word2vec, an embedding method that learns word vectors via a neural network with a single hidden layer. Another successful trend in learning semantic word representations is employing global matrix factorization over the word-word matrices. GloVe is an example of such methods.
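The matrix-factorization idea behind GloVe can be illustrated with a toy count-based model: build a word-word co-occurrence matrix and factorize it with truncated SVD to obtain low-dimensional vectors. This is only a sketch of the general principle; GloVe itself optimizes a weighted least-squares objective over log co-occurrence counts rather than using SVD, and the function name, window size, and dimensionality below are illustrative.

```python
import numpy as np

def cooccurrence_embeddings(sentences, window=2, dim=2):
    """Toy count-based embeddings via SVD of a co-occurrence matrix.

    Counts how often each word pair co-occurs within `window` positions,
    then keeps the top `dim` singular directions of the (log-scaled)
    count matrix as the embedding space.
    """
    vocab = sorted({w for s in sentences for w in s})
    idx = {w: i for i, w in enumerate(vocab)}
    C = np.zeros((len(vocab), len(vocab)))
    for s in sentences:
        for i, w in enumerate(s):
            for j in range(max(0, i - window), min(len(s), i + window + 1)):
                if i != j:
                    C[idx[w], idx[s[j]]] += 1.0
    U, S, _ = np.linalg.svd(np.log1p(C))
    return vocab, U[:, :dim] * S[:dim]

sentences = [["neural", "language", "model"],
             ["neural", "embedding", "model"]]
vocab, vecs = cooccurrence_embeddings(sentences, window=1, dim=2)
print(vocab)       # vocabulary in sorted order
print(vecs.shape)  # one dim-dimensional vector per vocabulary term
```

In practice, word2vec and GloVe are trained on corpora with billions of tokens, and the resulting vectors typically have a few hundred dimensions.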
The semantic similarity between terms is often computed using the cosine similarity of the word embedding vectors.
However, the raw similarity values are often not discriminative enough. On the other hand, as extensively explored in various NLP tasks, the ranking of words by their semantic similarity to a given term is accurate, especially for the closest terms. Therefore, we need a monotone mapping function to transform the similarity scores produced by popular similarity metrics, such as the cosine similarity.
The sigmoid function is a non-linear mapping that maps values in (−∞, +∞) to the [0, 1] interval. This function has been used in many machine learning algorithms, such as logistic regression and neural networks.
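A shifted and scaled sigmoid applied to cosine similarities can be sketched as follows. The steepness `a` and shift `c` below are illustrative hyper-parameters, not values prescribed by the method: similarities above `c` are pushed toward 1 and those below it toward 0, while the ranking of terms is preserved because the sigmoid is monotone.

```python
import numpy as np

def transformed_sim(sim, a=20.0, c=0.6):
    """Map a raw cosine similarity to a more discriminative value.

    Applies sigmoid(a * (sim - c)): a monotone transform, so the order
    of terms by similarity is unchanged, but gaps around c are widened.
    """
    return 1.0 / (1.0 + np.exp(-a * (sim - c)))

# a 0.2 gap in raw cosine similarity becomes a large gap after mapping
print(round(transformed_sim(0.7), 3))  # → 0.881
print(round(transformed_sim(0.5), 3))  # → 0.119
```

This makes the weights assigned to expansion terms far more selective: only terms very close to the query terms in the embedding space receive substantial weight.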