
Embedding-based Query Language Models - Article Reading Notes

The paper proposes the use of word embeddings to enhance the accuracy of query language models in the ad-hoc retrieval task. To this end, word embeddings are used to incorporate and weight terms that do not occur in the query but are semantically related to the query terms.

Word2vec and GloVe are examples of successful implementations of word embeddings; they learn embedding vectors using neural networks and matrix factorization, respectively.

The vocabulary mismatch problem is the mismatch between different vocabulary terms that refer to the same concept. This is a fundamental IR problem, since users often describe a concept in their queries using words different from those that the authors of relevant documents use to describe the same concept.

In addition to the terms that appear in the query, we incorporate and weight the words that do not occur in the query, but are semantically similar to the query terms. To do so, we propose two query expansion models with different simplifying assumptions. 
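The general idea can be sketched as follows. This is a minimal illustration, not a reproduction of the paper's two formal models: the toy embedding vectors, the tiny vocabulary, and the weighting of expansion terms directly by cosine similarity are all assumptions made for the sketch.

```python
import numpy as np

# Hypothetical 3-d embedding table (real models use 100-300 dimensions).
embeddings = {
    "car":     np.array([0.9, 0.1, 0.0]),
    "auto":    np.array([0.85, 0.15, 0.05]),
    "vehicle": np.array([0.8, 0.2, 0.1]),
    "banana":  np.array([0.0, 0.1, 0.9]),
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def expand_query(query_terms, top_k=2):
    """Build a weighted query model: original terms get full weight,
    and the top_k most similar vocabulary terms are added with a
    weight given by their cosine similarity to a query term."""
    weights = {t: 1.0 for t in query_terms}
    for t in query_terms:
        sims = [(w, cosine(embeddings[t], embeddings[w]))
                for w in embeddings if w not in query_terms]
        for w, s in sorted(sims, key=lambda x: -x[1])[:top_k]:
            weights[w] = max(weights.get(w, 0.0), s)
    total = sum(weights.values())
    return {w: s / total for w, s in weights.items()}  # normalized model

print(expand_query(["car"]))
```

Note that the expanded model is a probability distribution over terms (it sums to one), which is what a query language model requires.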

A well-known and effective technique in ad-hoc information retrieval to address the vocabulary mismatch problem is pseudo-relevance feedback (PRF). It assumes that a small set of top-retrieved documents is relevant to the query, and thus a number of relevant terms can be selected from this set of feedback documents (also called pseudo-relevant documents) to be added to the query model. 
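A minimal sketch of the PRF idea, assuming a toy collection and simple frequency-based term selection (real PRF models such as relevance models use more principled term weighting):

```python
from collections import Counter

STOPWORDS = {"and", "an", "of", "the", "a"}  # tiny illustrative stopword list

def prf_expansion_terms(query_terms, top_docs, num_terms=3):
    """Select the most frequent non-query, non-stopword terms from the
    pseudo-relevant (top-retrieved) documents as expansion candidates."""
    counts = Counter()
    for doc in top_docs:
        counts.update(w for w in doc.lower().split()
                      if w not in STOPWORDS and w not in query_terms)
    return [t for t, _ in counts.most_common(num_terms)]

# Pretend these are the top-3 retrieved documents for the query.
top_docs = [
    "electric car battery range and charging",
    "battery life of an electric vehicle",
    "charging an electric car battery",
]
print(prf_expansion_terms({"electric", "car"}, top_docs))
```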

Query expansion is the process of adding relevant terms to a query to improve retrieval effectiveness. There are a number of query expansion methods based on linguistic resources, such as WordNet, but they have not substantially improved retrieval performance, as they are based on global analysis. Global analysis often relies on external resources or document collections. Local analysis, on the other hand, expands queries using the documents that are related to them, such as top-retrieved documents. Query expansion via pseudo-relevance feedback is a common technique used to improve retrieval effectiveness in many retrieval models.

Word embedding techniques aim to capture the semantic similarity between vocabulary terms. The authors propose applying the sigmoid function to well-known similarity metrics in order to obtain more discriminative similarity values between terms.
 
Word embedding techniques learn a low-dimensional vector (compared to the vocabulary size) for each vocabulary term, in which the similarity between word vectors can reflect the semantic as well as the syntactic similarities between the corresponding words. This is why word embeddings are also called distributed semantic representations. Note that word embedding methods are categorized as unsupervised learning algorithms, since they only need a large amount of raw textual data in their training phase. A popular way to compute these word embedding vectors is via neural network-based language models.

Mikolov et al. introduced word2vec, an embedding method that learns word vectors via a neural network with a single hidden layer.  Another successful trend in learning semantic word representations is employing global matrix factorization over the word-word matrices. GloVe is an example of such methods.

The semantic similarity between terms is often computed using the cosine similarity of their word embedding vectors.
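Cosine similarity between two embedding vectors can be sketched with numpy; the vectors below are toy values, not real trained embeddings:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between vectors u and v, in [-1, 1]."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

king  = np.array([0.8, 0.3, 0.1])   # hypothetical embedding vectors
queen = np.array([0.7, 0.4, 0.1])
apple = np.array([0.1, 0.2, 0.9])

print(cosine_similarity(king, queen))  # high: semantically related terms
print(cosine_similarity(king, apple))  # much lower: unrelated terms
```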

The similarity values are not discriminative enough. On the other hand, as extensively explored in various NLP tasks, the ordering of terms by their semantic similarity to a given term is accurate, especially for the closest terms. Therefore, a monotone mapping function is needed to transform the similarity scores produced by popular similarity metrics, such as cosine similarity.

The sigmoid function is a non-linear mapping that maps values in (−∞,+∞) to the (0,1) interval. It has been used in many machine learning algorithms, such as logistic regression and neural networks.
