
Knowledge Graph Toolkit (KGTK)

https://kgtk.readthedocs.io/en/latest/

[Diagram: KGTK edge representation, with edge identifiers shown as orange circles]

KGTK represents KGs using TSV files with 4 columns labeled id, node1, label, and node2. The id column holds a symbol that identifies the edge, corresponding to the orange circles in the diagram above. node1 is the source of the edge, node2 is its destination, and label is the relation between node1 and node2.
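A minimal sketch of the format, using Wikidata-style identifiers purely as hypothetical content (columns are tab-separated):

    id    node1    label    node2
    e1    Q42      P31      Q5
    e2    Q42      P69      Q691283

Each row is one edge; e1 asserts that Q42 (Douglas Adams) is an instance of (P31) human (Q5).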

>> RDF quads: define each triple as its own graph


KGTK defines knowledge graphs (or more generally any attributed graph or hypergraph) as a set of nodes and a set of edges between those nodes. KGTK represents everything of meaning via an edge. Edges themselves can be attributed by having edges asserted about them; thus, KGTK can in fact represent arbitrary hypergraphs. KGTK intentionally does not distinguish attributes or qualifiers on nodes and edges from full-fledged edges; tools operating on KGTK graphs can instead interpret edges differently if they so desire. In KGTK, everything can be a node, and every node can have any type of edge to any other node.
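A schematic sketch of an edge asserted about another edge (identifiers hypothetical; real KGTK date values use a richer literal syntax):

    id    node1    label    node2
    e1    Q42      P69      Q691283
    e2    e1       P580     1971

The second row uses the id of the first edge (e1) as its node1, attaching a start-time qualifier to the edge itself rather than to a node.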


import-ntriples: This command imports one or more N-Triples files into KGTK format.
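A sketch of an invocation, assuming the -i/-o input and output options that KGTK commands commonly take (check the docs for the exact flags):

    kgtk import-ntriples -i input.nt -o output.tsv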

>> There are import commands for KGs such as Wikidata

The generate-wikidata-triples command generates triple files from a KGTK file. The generated triple files can then be loaded directly into a triple store.

The triple generator reads a tab-separated KGTK file, from standard input by default or from a given file. The KGTK file is required to have at least the following 4 fields: node1, label, node2, and id. The node1 field is the subject, label is the predicate, and node2 is the object.
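A minimal sketch, assuming the stdin default mentioned above; the real command takes further options (for example, a file describing property types), so treat this as illustrative only:

    kgtk generate-wikidata-triples < input.tsv > output.ttl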

>> Convert Wikidata into RDF to load into a triple store

Transformation commands
calc: this command performs calculations on one or more columns in a KGTK file. The output of a calculation can be written into an existing column or into a new column, which will be added after all existing columns.
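A hedged sketch of calc; the --do, --columns, and --into option names follow my reading of the KGTK docs and are worth double-checking:

    kgtk calc -i input.tsv -o output.tsv --do average --columns node2 --into result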

lexicalize builds English sentences from KGTK edge files. The primary purpose of this command is to construct inputs for text-based distance vector analysis. However, it may also prove useful for explaining the contents of local subsets of Knowledge Graphs.
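A sketch with assumed -i/-o flags, turning an edge file into sentences:

    kgtk lexicalize -i edges.tsv -o sentences.tsv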

Curation commands
validate-properties validates and filters property patterns in a KGTK file. We want to be able to detect violations of various constraint patterns.
SHACL is an existing RDF-based constraint system. We'd like KGTK to have something that is both easier for new users than RDF and more efficient to run.
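A sketch only; the option for supplying the pattern file is a guess (hypothetical --pattern-file), so consult the command's help for the real interface:

    kgtk validate-properties -i data.tsv --pattern-file patterns.tsv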

Analysis
find the connected components in a KGTK edge file (invocation sketches for this list follow below).
compute the embeddings of the file's entities. The structure of nodes and their relations is used to compute node embeddings. The set of metrics to compute is specified by the user. There are three supported output formats: glove, w2v, and kgtk. The default algorithm is ComplEx (TransE, DistMult, and RESCAL are also supported).
compute centrality metrics and connectivity statistics.
compute paths between each pair of source-target nodes.
find all nodes reachable from given root nodes in a KGTK edge file. That is, given a set of nodes N and a set of properties P, this command computes the set of nodes R that can be reached from N via paths containing any of the properties in P.
compute embeddings of nodes using the properties of nodes. The property values are concatenated into sentences defined by a template and embedded using a pre-trained language model.
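Invocation sketches for two of these (flag names assumed from the KGTK docs; verify before use):

    kgtk connected-components -i graph.tsv -o components.tsv
    kgtk reachable-nodes -i graph.tsv --root Q42 --props P279 -o reachable.tsv

The second sketch asks for every node reachable from the hypothetical root Q42 via subclass-of (P279) edges.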

Paper -> https://arxiv.org/pdf/2006.00088.pdf

