
GAKE: Graph Aware Knowledge Embedding - Paper Reading

 http://yangy.org/works/gake/gake-coling16.pdf
 
Jun Feng, Minlie Huang, Yang Yang, and Xiaoyan Zhu. 2016. GAKE: Graph Aware Knowledge Embedding. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 641–651, Osaka, Japan. The COLING 2016 Organizing Committee. 

GitHub -> https://github.com/JuneFeng/GAKE
 
Abstract
 
In this paper, we propose a graph aware knowledge embedding method (GAKE), which formulates a knowledge base as a directed graph and learns representations for any vertices or edges by leveraging the graph’s structural information. We introduce three types of graph context for embedding: neighbor context, path context, and edge context, each reflecting properties of knowledge from a different perspective.
 
[Here the context is given by the graph structure, the interconnections. Is it syntactic or semantic?]
 
1 Introduction
 
In this way, we see that most existing methods only consider “one hop” information about directly linked entities while missing more global information, such as multi-step paths or the K-degree neighbors of a given vertex. We call this structural information graph context, inspired by the textual context used in learning a given word’s representation (Mikolov et al., 2013).
   
Our contributions in this work include: (1) We treat a given knowledge base as a directed graph instead of a set of independent triples, and extract different types of graph context to study the representation of knowledge. (2) We propose a novel and general representation learning approach, GAKE (Graph Aware Knowledge Embedding), which can be easily extended to consider any type of graph context. (3) We propose an attention mechanism in our approach to learn the representation power of different entities and relations.
 
[Would it be possible to include the qualifiers of the contextual dimensions and of the relative context?]
 
2 Related Work
 
The knowledge base embedding models above all treat the knowledge base as a set of triples. In fact, however, a knowledge base is a graph, and its graph structure can be used to better embed the entities and relations it contains.
 
[Triples alone do not convey the complete information]
 
3 Our Approach
 
[Triples; it is not hyper-relational]
 
Definition 2 (Graph Context) Given a subject s_i, its graph context c(s_i) is a set of other subjects relevant to s_i: {s_w | s_w ∈ S, s_w relevant to s_i}.
 
[The context of a subject vertex consists of other subject or object vertices and the relations. Don’t the node’s properties, or the values of those properties, come into it?]
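As a concrete reading of Definition 2, here is a minimal Python sketch (my illustration, not the paper’s code): the knowledge base is a list of (head, relation, tail) triples, and the subject set S contains both vertices and edges:

# A knowledge base as a directed graph of (head, relation, tail) triples.
# In GAKE's terminology, every vertex AND every edge is a "subject" in S.
triples = [("v1", "BornInCity", "v2"),
           ("v2", "CityInState", "v3")]

entities = {h for h, _, _ in triples} | {t for _, _, t in triples}
relations = {r for _, r, _ in triples}
subjects = entities | relations  # the set S of Definition 2

print(sorted(subjects))
# ['BornInCity', 'CityInState', 'v1', 'v2', 'v3']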
 
Different types of graph context define the “relevance” between subjects differently. In this work, we use three types of graph context as examples, which will be introduced in detail later.
 
Neighbor context. Given a subject s_i, taking an entity as an example, we regard each of its out-neighbors, along with the connecting relations, as its neighbor context.
 
[All the neighboring nodes and relations; there is no filtering]
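A minimal sketch of neighbor-context extraction under this definition (the entity names and the triple-list representation are my illustration):

triples = [("Tsinghua", "LocatedIn", "Beijing"),
           ("Tsinghua", "TypeOf", "University"),
           ("Beijing", "CapitalOf", "China")]

def neighbor_context(triples, entity):
    # All out-neighbors of `entity`, each paired with the relation
    # that reaches it -- no filtering, exactly as the note observes.
    return [(r, t) for h, r, t in triples if h == entity]

print(neighbor_context(triples, "Tsinghua"))
# [('LocatedIn', 'Beijing'), ('TypeOf', 'University')]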
 
Path context. A path in a given knowledge graph reflects both direct and indirect relations between entities. For example, the path v1 -BornInCity-> v2 -CityInState-> v3 -StateInCountry-> v4 indicates the relation “Nationality” between v1 and v4. In this work, given a subject s_i, we use random walks to collect several paths starting from s_i.
 
[Random walks]
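A sketch of collecting one path by random walk, assuming uniform choice among out-edges and a fixed hop budget (both simplifications of mine; the paper only says it collects paths via random walks):

import random

triples = [("v1", "BornInCity", "v2"),
           ("v2", "CityInState", "v3"),
           ("v3", "StateInCountry", "v4")]

def random_walk(triples, start, max_hops=3, rng=random):
    # Follow uniformly chosen out-edges from `start`, recording the
    # alternating entity/relation sequence; stop at a dead end.
    path, current = [start], start
    for _ in range(max_hops):
        out = [(r, t) for h, r, t in triples if h == current]
        if not out:
            break
        r, t = rng.choice(out)
        path += [r, t]
        current = t
    return path

print(random_walk(triples, "v1"))
# ['v1', 'BornInCity', 'v2', 'CityInState', 'v3', 'StateInCountry', 'v4']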
 
Edge context. All relations connecting a given entity are representative of that entity, while all entities linked by a given relation can likewise represent that relation. For example, a relation connected with “United Kingdom”, “France”, “China”, and “United States” is most likely “Nationality”. We define the edge context c_E(s_i) of a subject s_i as all other subjects directly linked with s_i. When s_i is a vertex, c_E(s_i) is the set of edges of s_i, while when s_i is an edge, c_E(s_i) consists of all vertices connected with s_i.
 
[All the relations leaving a node, or all the nodes linked by a relation; there is no filtering]
[Structure only; the semantics of the relations is not considered]
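Both directions of the edge context, sketched on a toy triple list (“Alice”, “Bob”, and “BornInCity” here are hypothetical; only the country names come from the paper’s example):

triples = [("Alice", "Nationality", "United Kingdom"),
           ("Bob", "Nationality", "France"),
           ("Alice", "BornInCity", "London")]

def edge_context_of_vertex(triples, entity):
    # For a vertex: all relations touching it, in either direction.
    return {r for h, r, t in triples if entity in (h, t)}

def edge_context_of_edge(triples, relation):
    # For an edge: all vertices connected by that relation.
    return {v for h, r, t in triples if r == relation for v in (h, t)}

print(edge_context_of_vertex(triples, "Alice"))
# {'Nationality', 'BornInCity'} (a set; print order may vary)
print(edge_context_of_edge(triples, "Nationality"))
# {'Alice', 'United Kingdom', 'Bob', 'France'} (order may vary)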
 
Context extension. To utilize other types of graph context, one first defines c(s_i) and the algorithm used to extract that context from the given knowledge graph G. After that, the remaining steps of representation learning are exactly the same as for the other types of graph context. Our framework is thus general and flexible, and can easily be extended with different types of graph context.
 
[The extension could be per contextual dimension]
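One way to read this extensibility claim as code (my framing; the paper defines no API): each context type reduces to a function from a subject to its relevant subjects, so adding a type is a single registration:

from typing import Callable, Dict, List

# A context type is just a function: subject -> relevant subjects.
ContextExtractor = Callable[[str], List[str]]

extractors: Dict[str, ContextExtractor] = {}

def register(name: str, fn: ContextExtractor) -> None:
    # New context types plug in here; the training loop that maximizes
    # log p(s_i | c(s_i)) never needs to change.
    extractors[name] = fn

register("neighbor", lambda s: [])  # placeholder extractor, for illustration
register("k_degree", lambda s: [])  # e.g. the K-degree neighbors of Section 1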
 
To utilize these three types of context, we combine them by jointly maximizing their objective functions.
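Written out, the combined objective has the following shape (my reconstruction from the paper’s description: λ_N, λ_P, λ_E weight the three context types, and each per-type objective sums the log-likelihood of a subject given its context):

O(S) = \lambda_N O_N(S) + \lambda_P O_P(S) + \lambda_E O_E(S),
\qquad
O_T(S) = \sum_{s_i \in S} \log p\big(s_i \mid c_T(s_i)\big)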
 
4 Experiments
 
We evaluate our proposed approach with two experiments: (1) triple classification (Bordes et al., 2013; Wang et al., 2014; Lin et al., 2015b), which determines whether a given triple is correct or not, and (2) link prediction (Wang et al., 2014; Xiao et al., 2016b), which aims to predict missing entities.
 
[Solves KGC tasks - Knowledge Graph Completion]
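For reference, a minimal sketch of how link prediction is usually evaluated in this literature (rank the true tail among all candidate entities under the model’s score; `score` below is a hypothetical stand-in for a trained model):

entities = ["London", "Paris", "Beijing"]

def score(h, r, t):
    # Hypothetical trained scorer: higher means more plausible.
    return 0.9 if (h, r, t) == ("United Kingdom", "Capital", "London") else 0.1

def rank_of_tail(score, head, relation, true_tail, entities):
    # Rank of the true tail among all candidates (1 = best).
    # Mean rank and Hits@10 aggregate these ranks over test triples.
    ranked = sorted(entities, key=lambda t: score(head, relation, t), reverse=True)
    return ranked.index(true_tail) + 1

print(rank_of_tail(score, "United Kingdom", "Capital", "London", entities))  # 1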
          
