
Embedding-based Query Language Models - Article Reading Notes

Use of word embeddings to enhance the accuracy of query language models in the ad-hoc retrieval task. To this end, we propose to use word embeddings to incorporate and weight terms that do not occur in the query, but are semantically related to the query terms.

Word2vec and GloVe are examples of successful implementations of word embeddings that respectively use neural networks and matrix factorization to learn embedding vectors.

Vocabulary mismatch problem, i.e., different vocabulary terms referring to the same concept. This is a fundamental IR problem, since users often use different words in their queries to describe a concept than those that document authors use to describe the same concept.

In addition to the terms that appear in the query, we incorporate and weight the words that do not occur in the query, but are semantically similar to the query terms. To do so, we propose two query expansion models with different simplifying assumptions. 
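As a rough sketch of such an expansion scheme (the function name and the "sum"/"prod" weighting choices below are illustrative assumptions, not the paper's exact estimators): each vocabulary term is weighted by its similarity to the query terms, either by summing similarities (terms related to any query term score well) or by multiplying them (terms must be related to all query terms at once).

```python
import math

def embedding_query_expansion(query_terms, vocab, sim, mode="sum"):
    """Weight each vocabulary term by its similarity to the query terms.
    Two illustrative simplifying assumptions: 'sum' treats query terms
    independently; 'prod' requires similarity to all query terms at once."""
    weights = {}
    for w in vocab:
        sims = [sim(w, t) for t in query_terms]
        weights[w] = sum(sims) if mode == "sum" else math.prod(sims)
    total = sum(weights.values())
    # Normalize into a probability distribution over expansion terms.
    return {w: s / total for w, s in weights.items()}

# Toy similarity lookup standing in for an embedding-based similarity.
SIM = {("auto", "car"): 0.9, ("auto", "insurance"): 0.4,
       ("fruit", "car"): 0.1, ("fruit", "insurance"): 0.1}
sim = lambda w, t: SIM.get((w, t), 0.0)

dist = embedding_query_expansion(["car", "insurance"], ["auto", "fruit"], sim)
print(dist)  # "auto" dominates, as it is close to both query terms
```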

A well-known and effective technique in ad-hoc information retrieval to address the vocabulary mismatch problem is pseudo-relevance feedback (PRF). It assumes that a small set of top-retrieved documents is relevant to the query, and thus a number of relevant terms can be selected from this set of feedback documents (also called pseudo-relevant documents) to be added to the query model. 
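A minimal sketch of the term-selection step of PRF, assuming a naive frequency-based scoring over a toy feedback set (real systems use stronger scoring, e.g. relevance models):

```python
from collections import Counter

def prf_expansion_terms(query_terms, feedback_docs, k=5):
    """Select candidate expansion terms from pseudo-relevant documents:
    terms frequent in the feedback set but absent from the query."""
    counts = Counter()
    for doc in feedback_docs:
        counts.update(doc.split())
    # Drop the original query terms; keep the k most frequent remaining terms.
    for t in query_terms:
        counts.pop(t, None)
    return [t for t, _ in counts.most_common(k)]

# Toy top-retrieved documents for the query "car insurance".
docs = [
    "auto insurance policy rates",
    "car auto vehicle policy",
    "vehicle policy premium rates",
]
print(prf_expansion_terms(["car", "insurance"], docs, k=3))
```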

Query expansion is the process of adding relevant terms to a query to improve the retrieval effectiveness. There are a number of query expansion methods based on linguistic resources, such as WordNet, but they have not substantially improved the retrieval performance as they are based on global analysis. Global analysis often relies on external resources or document collections. On the other hand, local analysis expands queries using the documents that are related to them, like top-retrieved documents. Query expansion via pseudo-relevance feedback is a common technique used to improve retrieval effectiveness in many retrieval models.

Word embedding techniques aim to capture the semantic similarity between vocabulary terms. We propose to use the sigmoid function over well-known similarity metrics to achieve more discriminative similarity values between terms.
 
Word embedding techniques learn a low-dimensional vector (compared to the vocabulary size) for each vocabulary term, in which the similarity between the word vectors can capture the semantic as well as the syntactic similarities between the corresponding words. This is why word embeddings are also called distributed semantic representations. Note that word embedding methods are categorized as unsupervised learning algorithms, since they only need a large amount of raw textual data in their training phase. A popular way to compute these word embedding vectors is neural network-based language models.

Mikolov et al. introduced word2vec, an embedding method that learns word vectors via a neural network with a single hidden layer.  Another successful trend in learning semantic word representations is employing global matrix factorization over the word-word matrices. GloVe is an example of such methods.

The semantic similarity between terms is often computed using the cosine similarity of the word embedding vectors.
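For instance, a plain-Python cosine similarity over toy vectors (the three-dimensional "embeddings" below are made up for illustration; real vectors are learned by word2vec or GloVe and have hundreds of dimensions):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-dimensional "embeddings" for illustration only.
emb = {
    "car":   [0.9, 0.1, 0.0],
    "auto":  [0.8, 0.2, 0.1],
    "fruit": [0.0, 0.9, 0.4],
}
print(cosine(emb["car"], emb["auto"]))   # close to 1 for related terms
print(cosine(emb["car"], emb["fruit"]))  # much lower for unrelated terms
```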

The similarity values themselves are not discriminative enough. On the other hand, as extensively explored in various NLP tasks, the ranking of words by their semantic similarity to a given term is accurate, especially for the closest terms. Therefore, we need a monotone mapping function to transform the similarity scores produced by the popular similarity metrics, such as the cosine similarity.

The sigmoid function is a non-linear mapping function that maps values in (−∞,+∞) to the (0,1) interval. This function has been used in many machine learning algorithms, such as logistic regression and neural networks.
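A sketch of this monotone mapping, where the steepness `a` and shift `c` are illustrative values chosen for the example, not the parameters tuned in the paper. Because the sigmoid is monotone it preserves the (reliable) similarity ranking, while stretching the gap between similarity scores around the shift point:

```python
import math

def sigmoid_transform(cos_sim, a=20.0, c=0.7):
    """Map a raw cosine similarity through a shifted, scaled sigmoid.
    a (steepness) and c (shift) are illustrative, untuned values."""
    return 1.0 / (1.0 + math.exp(-a * (cos_sim - c)))

# Two raw similarities that differ only slightly ...
print(sigmoid_transform(0.80) - sigmoid_transform(0.75))  # gap is widened
print(0.80 - 0.75)                                        # raw gap
```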
