
Foundations of RDF⋆ and SPARQL⋆ - Article Reading Notes

Hartig, Olaf. “Foundations of RDF⋆ and SPARQL⋆ (An Alternative Approach to Statement-Level Metadata in RDF).” AMW (2017).
 
The term statement-level metadata refers to a form of data that captures information about another piece of data representing a single statement or fact. A typical example is so-called edge properties in graph databases; such an edge property takes the form of a key-value pair that captures additional information about the relationship represented by the edge with which it is associated.
 
While the Resource Description Framework (RDF) [1] presents another graph-based approach to represent statements about entities and their relationships, its triple-based data model does not natively support the representation of statement-level metadata. To mitigate this limitation, the RDF specification introduces the notion of RDF reification, which can be used to provide a set of RDF triples that describe some other RDF triple.
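Reification's four-triples-per-statement scheme can be made concrete with a minimal sketch; plain Python tuples stand in for RDF triples, and the ex: names are illustrative only:

```python
# Sketch: expanding one triple into the four standard RDF reification triples.
# Plain Python tuples stand in for RDF triples; the ex: names are made up.

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

def reify(triple, stmt_iri):
    """Return the four reification triples describing `triple`."""
    s, p, o = triple
    return [
        (stmt_iri, RDF + "type",      RDF + "Statement"),
        (stmt_iri, RDF + "subject",   s),
        (stmt_iri, RDF + "predicate", p),
        (stmt_iri, RDF + "object",    o),
    ]

meta = reify(("ex:bob", "ex:age", '"23"'), "ex:stmt1")
# The statement IRI can now be the subject of metadata triples, e.g.:
# ("ex:stmt1", "ex:certainty", '"0.9"')
```

The overhead the paper criticizes is visible directly: one data triple costs four extra triples before any actual metadata is attached.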
 
The example highlights two major shortcomings of RDF reification: First, adding four reification triples for every reified triple is inefficient for exchanging RDF data. Second, writing queries to access statement-level metadata is cumbersome because any metadata-related (sub)expression in a query has to be accompanied by another subexpression to match the corresponding four reification triples.
 
That is, in the extended language, called SPARQL*, triple patterns may also be nested, which gives users a query syntax in which accessing specific metadata about a triple is just a matter of mentioning the triple in the subject (or object) position of a metadata-related triple pattern.
 
As a basis for the following definitions, we assume pairwise disjoint sets I (all IRIs), B (blank nodes), and L (literals). As usual, a tuple in (I ∪ B) × I × (I ∪ B ∪ L) is an RDF triple, and a set of RDF triples is an RDF graph [1]. RDF* extends the notion of such triples by allowing for triples that have another triple in their subject or object position. Such nesting may be arbitrarily deep. The following definition captures this notion.
 
Definition 1. An RDF* triple is a 3-tuple that is defined recursively as follows:
1. Any RDF triple t ∈ (I ∪ B) × I × (I ∪ B ∪ L) is an RDF* triple; and
2. Given RDF* triples t and t′, and RDF terms s ∈ (I ∪ B), p ∈ I and o ∈ (I ∪ B ∪ L), then the tuples (t, p, o), (s, p, t′) and (t, p, t′) are RDF* triples.
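The recursion in Definition 1 translates almost literally into code. A minimal sketch, assuming RDF terms are represented as plain strings and triples as 3-tuples (all names illustrative):

```python
# Sketch of Definition 1: an RDF* triple is a 3-tuple whose subject and
# object may themselves be RDF* triples, nested arbitrarily deep.
# RDF terms (IRIs, blank nodes, literals) are plain strings here.

def is_term(x):
    return isinstance(x, str)

def is_rdf_star_triple(t):
    if not (isinstance(t, tuple) and len(t) == 3):
        return False
    s, p, o = t
    subject_ok = is_term(s) or is_rdf_star_triple(s)  # rule 2: nested triple
    object_ok  = is_term(o) or is_rdf_star_triple(o)  # rule 2: nested triple
    return subject_ok and is_term(p) and object_ok

# A metadata triple about the triple ("ex:bob", "ex:age", "23"):
nested = (("ex:bob", "ex:age", '"23"'), "ex:certainty", '"0.9"')
```

Note that the predicate position stays a plain term in both rules; only subject and object may nest.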
 
We recall that the basic building block of SPARQL queries is a basic graph pattern (BGP); that is, a finite set of triple patterns, where every triple pattern is a tuple of the form (s, p, o) ∈ (V ∪ I ∪ L) × (V ∪ I) × (V ∪ I ∪ L), with V being a set of query variables that is disjoint from I, B, and L. SPARQL* extends these patterns by adding the possibility to nest triple patterns within one another.
 
Definition 2. A triple* pattern is a 3-tuple that is defined recursively as follows:
1. Any triple pattern tp ∈ (V ∪ I ∪ L) × (V ∪ I) × (V ∪ I ∪ L) is a triple* pattern; and
2. Given two triple* patterns tp and tp′, and s ∈ (V ∪ I ∪ L), p ∈ (V ∪ I) and o ∈ (V ∪ I ∪ L), then (tp, p, o), (s, p, tp′) and (tp, p, tp′) are triple* patterns.
 
Before defining a query semantics of SPARQL*, we recall that the semantics of SPARQL is defined based on the notion of solution mappings [2,7], that is, partial mappings μ : V → (I ∪ B ∪ L). For SPARQL* we extend this notion to so-called solution* mappings that may bind variables not only to IRIs, blank nodes, or literals, but also to RDF* triples. Hence, a solution* mapping η is a partial mapping η : V → (T* ∪ I ∪ B ∪ L), where T* denotes the set of all RDF* triples.
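The effect of solution* mappings can be sketched by matching a single triple* pattern against a single RDF* triple: nested patterns recurse, and a variable may end up bound to a whole triple. A minimal sketch, with variables as strings starting with "?" and all other names illustrative:

```python
# Sketch of a solution* mapping: matching one triple* pattern against one
# RDF* triple. A successful match returns a binding from variables to RDF
# terms or to whole RDF* triples; a failed match returns None.

def match(pattern, triple, binding=None):
    binding = dict(binding or {})
    for pat, val in zip(pattern, triple):
        if isinstance(pat, str) and pat.startswith("?"):   # variable
            if pat in binding and binding[pat] != val:
                return None                                # inconsistent binding
            binding[pat] = val                             # may bind a triple!
        elif isinstance(pat, tuple) and isinstance(val, tuple):
            sub = match(pat, val, binding)                 # recurse into nesting
            if sub is None:
                return None
            binding = sub
        elif pat != val:
            return None                                    # constant mismatch
    return binding

data = (("ex:bob", "ex:age", '"23"'), "ex:certainty", '"0.9"')
mu = match((("?s", "ex:age", "?a"), "ex:certainty", "?c"), data)
# mu == {"?s": "ex:bob", "?a": '"23"', "?c": '"0.9"'}
```

This is exactly the convenience the paper promises: the metadata value is reached by mentioning the data triple in the subject position of the pattern, with no four-triple reification subexpression.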
 
A triple-to-ID mapping id is an injective function id : T* → (I ∪ B).
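Such a mapping is what allows an RDF* graph to be flattened into plain RDF: each nested triple is replaced by its (fresh, injectively assigned) identifier and described by reification triples. A minimal sketch under the same tuple/string conventions as above (blank-node labels and names illustrative):

```python
# Sketch of a triple-to-ID mapping and the reification-based flattening it
# enables: each RDF* triple gets a fresh blank node, injectively.

import itertools

_counter = itertools.count()
_ids = {}

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

def triple_id(t):
    """Injective map id : T* -> blank-node identifiers."""
    if t not in _ids:
        _ids[t] = f"_:t{next(_counter)}"
    return _ids[t]

def embed(t, out):
    """Emit reification triples for nested triple t; return its identifier."""
    tid = triple_id(t)
    s, p, o = t
    s = embed(s, out) if isinstance(s, tuple) else s
    o = embed(o, out) if isinstance(o, tuple) else o
    out += [(tid, RDF + "type",      RDF + "Statement"),
            (tid, RDF + "subject",   s),
            (tid, RDF + "predicate", p),
            (tid, RDF + "object",    o)]
    return tid

def flatten(t):
    """Rewrite one top-level RDF* triple into a list of plain RDF triples."""
    out = []
    s, p, o = t
    s = embed(s, out) if isinstance(s, tuple) else s
    o = embed(o, out) if isinstance(o, tuple) else o
    out.append((s, p, o))
    return out

triples = flatten((("ex:bob", "ex:age", '"23"'), "ex:certainty", '"0.9"'))
# Five plain RDF triples: four reification triples for the nested triple,
# plus the metadata triple ("_:t0", "ex:certainty", '"0.9"').
```

Injectivity matters: if two distinct nested triples mapped to the same identifier, their metadata would be conflated in the flattened graph.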
   
 
 
