
Knowledge Graphs Querying - Article Reading Notes

Arijit Khan. 2023. Knowledge Graphs Querying. SIGMOD Rec. 52, 2 (June 2023), 18–29. https://doi.org/10.1145/3615952.3615956

ABSTRACT
Querying KGs is critical in web search, question answering (QA), semantic search, personal assistants, fact checking, and recommendation.

[Systems / tasks where KG querying is used]

First, research on KG querying has been conducted by several communities, such as databases, data mining, semantic web, machine learning, information retrieval, and natural language processing (NLP), with different focus and terminologies; and also in diverse topics ranging from graph databases, query languages, join algorithms, graph pattern matching, to more sophisticated KG embedding and natural language questions (NLQs).

[Different perspectives on the problems raised by KG querying]

Second, many recent advances on KG and query embedding, multimodal KG, and KG-QA come from deep learning, IR, NLP, and computer vision domains. 

[Which communities are proposing solutions, and which problems remain open, e.g., handling incompleteness]

1 Introduction

[KG for integrating data sources with a flexible schema]

1.1 Challenges in KG Querying

Scalability and efficiency of graph query processing ....

Additionally, the notion of ‘relevant’ or ‘correct’ answers could very well depend on the user’s query intent, or can even be vague, thus a predefined, ‘one-size-fits-all’ similarity metric might not work in all scenarios.

[Similarity approaches do not depend on the task]

Incomplete KGs. Knowledge graphs are incomplete and follow the open-world assumption — information available in a KG only captures a subset of reality. To retrieve the complete set of correct answers for a given query, one must infer missing edges and relations.

[Complete the KG with data from the KG itself or from external sources]

User-friendly querying

[Interactive query formulation, completion, explanation]

2 Taxonomy of KG Querying

Graph workloads are broadly classified into two categories [67]: (1) online graph queries consisting of ad-hoc graph traversal and pattern matching – exploring a small fraction of the entire graph and requiring fast response time; (2) offline graph analytics with iterative, batch processing over the entire graph, e.g., PageRank, clustering, community detection, and machine learning algorithms.
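
To make the distinction concrete, here is a minimal sketch (my own illustration, not from the paper), assuming networkx and a toy three-edge KG: the online query touches only one entity's neighborhood, while the analytics job (PageRank) iterates over the entire graph.

```python
# Toy contrast between the two workload categories (assumes networkx).
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("Alan_Turing", "Computer_Science", relation="field")
kg.add_edge("Alan_Turing", "London", relation="bornIn")
kg.add_edge("Ada_Lovelace", "Computer_Science", relation="field")

# (1) Online graph query: ad-hoc traversal touching a small fraction of
# the graph, e.g., "which entities is Alan_Turing directly linked to?"
print([(t, d["relation"]) for _, t, d in kg.out_edges("Alan_Turing", data=True)])

# (2) Offline graph analytics: iterative, batch computation over the
# entire graph, e.g., PageRank scores for every node.
print(nx.pagerank(kg))
```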

The focus of this article is read-only online queries without updates in the KG. KG querying is essential for web search [129], QA [119], semantic search [155], personal assistants [12], fact checking [143], and recommendation [167].

[The CaKQ Query Engine is an online graph query workload]

2.1 KG Data Models

[RDF and LPG]
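
As a rough illustration of the two data models (my own toy example, with made-up entities), the same facts can be encoded as RDF triples or as a labeled property graph (LPG) whose nodes and edges carry key-value properties:

```python
# Hypothetical toy example contrasting the two KG data models.

# RDF: a set of (subject, predicate, object) triples; an entity's
# attributes are themselves triples, nothing lives "inside" a node.
rdf_triples = {
    ("ex:Alan_Turing", "rdf:type", "ex:Person"),
    ("ex:Alan_Turing", "ex:bornIn", "ex:London"),
    ("ex:Alan_Turing", "ex:birthYear", '"1912"'),
}

# LPG: nodes and edges have labels plus arbitrary key-value properties
# attached directly to them.
lpg = {
    "nodes": {
        "n1": {"label": "Person", "properties": {"name": "Alan Turing", "birthYear": 1912}},
        "n2": {"label": "City", "properties": {"name": "London"}},
    },
    "edges": [
        {"from": "n1", "to": "n2", "label": "BORN_IN", "properties": {"source": "manual"}},
    ],
}
```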

2.2 KG Query and Question Classification

[Translate a Question into a Query]

[Simple vs. complex queries. Conjunction, Disjunction, Negation, etc. ... Paths]

[Factoid vs. Aggregate/Abstract]

2.3 KG Query Languages & Technologies

[SPARQL, Cypher, GQL, SQL extensions, ...]
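
A minimal sketch of a SPARQL query, assuming the rdflib Python library and a toy RDF graph; Cypher or GQL would express the same basic graph pattern over an LPG.

```python
# Small SPARQL example (assumes rdflib is installed).
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Alan_Turing, EX.bornIn, EX.London))
g.add((EX.Ada_Lovelace, EX.bornIn, EX.London))

# Basic graph pattern: which entities were born in London?
query = """
PREFIX ex: <http://example.org/>
SELECT ?person WHERE { ?person ex:bornIn ex:London . }
"""
for row in g.query(query):
    print(row.person)
```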

[Keyword]

2.4 Benchmarks for KG Query & QA

3 KG Query Processing & QA: Recent Neural Methods 

3.1 Embedding-based KG Query Processing

[Convert the KG and the queries into embeddings and compute the distance]
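
A minimal numpy sketch of the idea (random vectors stand in for trained embeddings): embed the query as a point in vector space, TransE-style (query ≈ head + relation), and rank candidate answer entities by their distance to that point.

```python
# Toy embedding-based query answering; random vectors replace trained
# TransE-style embeddings for illustration only.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
entities = ["Alan_Turing", "London", "Computer_Science", "Paris"]
ent_emb = {e: rng.normal(size=dim) for e in entities}
rel_emb = {"bornIn": rng.normal(size=dim)}

# Query (Alan_Turing, bornIn, ?t)  ->  query point q = h + r
q = ent_emb["Alan_Turing"] + rel_emb["bornIn"]

# Rank all candidate answers by L2 distance to the query point; with
# trained embeddings the true tail entity should rank near the top.
print(sorted(entities, key=lambda e: np.linalg.norm(q - ent_emb[e])))
```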

3.2 Multi-modal KG Embedding

3.3 Neural Methods for KG-QA

Answering natural language questions (NLQ) over knowledge graphs involves several subtasks, including entity linking, relationship identification, identifying logical and numerical operators, query forms, and intent, and finally the formal query construction [111]. Rule-based methods use ontologies and the KG for phrase mapping and disambiguation to link entities and relations to the KG, and then employ grammars to generate formal queries.

Recently, neural network-based semantic parsing algorithms have become popular for KG-QA, which are categorized as classification, ranking, and translation-based [28]. 
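
A hedged sketch of the ranking-based variant: encode the NLQ and each candidate formal query and pick the highest-scoring pair. The `encode` function below is a hypothetical stand-in for a real sentence encoder (e.g., a pretrained language model); classification-based methods would instead predict template or relation labels, and translation-based methods would generate the query token by token.

```python
# Illustrative ranking-based semantic parsing for KG-QA.
import numpy as np

def encode(text: str, dim: int = 64) -> np.ndarray:
    """Toy text encoder: sums hashed word vectors (stand-in for a real model)."""
    vec = np.zeros(dim)
    for tok in text.lower().split():
        tok_rng = np.random.default_rng(abs(hash(tok)) % (2**32))
        vec += tok_rng.normal(size=dim)
    return vec / (np.linalg.norm(vec) + 1e-9)

nlq = "where was alan turing born"
candidates = [
    "SELECT ?c WHERE { ex:Alan_Turing ex:bornIn ?c }",
    "SELECT ?f WHERE { ex:Alan_Turing ex:field ?f }",
]

# Rank candidate formal queries by cosine similarity to the question.
q_vec = encode(nlq)
print(max(candidates, key=lambda c: float(q_vec @ encode(c))))
```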

3.4 Conversational QA on KG

4 Graph Databases Support for KG Query

5 Future Directions

Therefore, KGs can be a unified data model for complex data lake problems, to model cross-domain and diverse data.
