http://yangy.org/works/gake/gake-coling16.pdf
Jun Feng, Minlie Huang, Yang Yang, and Xiaoyan Zhu. 2016. GAKE: Graph Aware Knowledge Embedding. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 641–651, Osaka, Japan. The COLING 2016 Organizing Committee.
GitHub -> https://github.com/JuneFeng/GAKE
Abstract
In this paper, we propose a graph aware knowledge embedding method (GAKE), which formulates a knowledge base as a directed graph and learns representations for any vertex or edge by leveraging the graph's structural information. We introduce three types of graph context for embedding: neighbor context, path context, and edge context, each of which reflects properties of knowledge from a different perspective.
[Here the context is given by the graph structure, the interconnections. Is it syntactic or semantic?]
1 Introduction
In this way, we see that most existing methods consider only "one hop" information about directly linked entities while missing more global information, such as multi-step paths, the K-degree neighbors of a given vertex, etc. We call this different structural information graph context, inspired by the textual context used in learning a given word's representation (Mikolov et al., 2013).
Our contributions in this work include: (1) We treat a given knowledge base as a directed graph instead of a set of independent triples, and extract different types of graph context to study the representation of knowledge. (2) We propose a novel and general representation learning approach, GAKE (Graph Aware Knowledge Embedding), which can easily be extended to consider any type of graph context. (3) We propose an attention mechanism in our approach to learn the representation power of different entities and relations.
[Would it be possible to include the qualifiers of the contextual dimensions and of the relative context?]
2 Related Work
The knowledge base embedding models above all treat the knowledge base as a set of triples. In fact, however, a knowledge base is a graph, and its graph structure can be used to better embed the entities and relations it contains.
[Triples alone do not convey the complete information]
3 Our Approach
[Triples; it is not hyper-relational]
Definition 2 (Graph Context) Given a subject s_i, its graph context c(s_i) is a set of other subjects relevant to s_i: {s_w | s_w ∈ S, s_w relevant to s_i}.
[The context of a subject vertex consists of other subject or object vertices and the relations. Don't the node's properties and the values of those properties come into it?]
Different types of graph context define the "relevance" between subjects differently. In this work, we use three types of graph context as examples, which will be introduced in detail later.
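To make the notion of a "subject" concrete, here is a minimal Python sketch (not taken from the GAKE repository) of a knowledge base viewed as a directed graph in which both entities and relations are subjects. The class and the toy triples are illustrative, reusing the BornInCity path example quoted later in the paper.

```python
from collections import defaultdict

class KnowledgeGraph:
    """Directed-graph view of a triple store: entities are vertices,
    relations are edge labels, and both count as 'subjects'."""

    def __init__(self, triples):
        # triples: iterable of (head_entity, relation, tail_entity)
        self.triples = list(triples)
        self.out_edges = defaultdict(list)   # head -> [(relation, tail), ...]
        for h, r, t in self.triples:
            self.out_edges[h].append((r, t))
        self.entities = {h for h, _, _ in self.triples} | {t for _, _, t in self.triples}
        self.relations = {r for _, r, _ in self.triples}
        # every entity and every relation is a subject that gets an embedding
        self.subjects = self.entities | self.relations

kg = KnowledgeGraph([
    ("v1", "BornInCity", "v2"),
    ("v2", "CityInState", "v3"),
    ("v3", "StateInCountry", "v4"),
])
```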
Neighbor context. Given a subject s_i, taking an entity as an example, we regard each of its out-neighbors, along with their relations, as the neighbor context.
[All neighboring nodes and the relations; there is no filter]
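Continuing the sketch above, neighbor context could be extracted roughly like this; the helper name is an assumption, and, as the note says, every out-edge is kept without filtering.

```python
def neighbor_context(kg, subject):
    """Neighbor context of an entity: each out-neighbor together with the
    relation leading to it (every out-edge is kept, no filtering)."""
    return list(kg.out_edges.get(subject, []))

print(neighbor_context(kg, "v1"))   # [('BornInCity', 'v2')]
```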
Path context. A path in a given knowledge graph reflects both direct and indirect relations between entities. For example, the path v1 -BornInCity→ v2 -CityInState→ v3 -StateInCountry→ v4 indicates the relation "Nationality" between v1 and v4. In this work, given a subject s_i, we use random walks to collect several paths starting from s_i.
[Random walks]
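A hedged sketch of collecting one path context by a random walk over the same toy graph; `max_hops` and the sampling details are assumptions, not the paper's exact procedure.

```python
import random

def sample_path(kg, start, max_hops=3, seed=None):
    """One path context collected by a random walk from `start`,
    recording the alternating relations and entities along the way."""
    rng = random.Random(seed)
    path, current = [start], start
    for _ in range(max_hops):
        choices = kg.out_edges.get(current, [])
        if not choices:
            break
        relation, current = rng.choice(choices)
        path.extend([relation, current])
    return path

print(sample_path(kg, "v1", seed=0))
# ['v1', 'BornInCity', 'v2', 'CityInState', 'v3', 'StateInCountry', 'v4']
```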
Edge context. All relations connecting a given entity are representative of that entity, while all entities linked with a given relation are also able to represent that relation. For example, a relation connected with "United Kingdom", "France", "China", and "United States" is most likely to be "Nationality". We define the edge context c_E(s_i) of a subject s_i as all other subjects directly linked with s_i. When s_i is a vertex, c_E(s_i) is a set of edges of s_i, while when s_i is an edge, c_E(s_i) consists of all vertices connected with s_i.
[All the relations leaving a node, or all the nodes linked by a relation; there is no filter]
[Structure only; the semantics of the relations is not considered]
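Edge context, sketched under the same assumptions: for a vertex it returns the relations touching it, for a relation the entities it connects. Whether incoming edges are included is a guess on my part, not something the excerpt pins down.

```python
def edge_context(kg, subject):
    """Edge context: relations touching a vertex, or entities connected
    by a relation (both directions included in this sketch)."""
    if subject in kg.entities:
        return [r for h, r, t in kg.triples if subject in (h, t)]
    return [e for h, r, t in kg.triples if r == subject for e in (h, t)]

print(edge_context(kg, "v2"))           # ['BornInCity', 'CityInState']
print(edge_context(kg, "CityInState"))  # ['v2', 'v3']
```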
Context extension. To utilize other types of graph context, one could first define c(s_i) and the algorithm used to extract that context from the given knowledge graph G. After that, the remaining steps for knowledge representation learning would be exactly the same as for the other types of graph context. Thus, our framework is general and flexible, and can easily be extended with different types of graph context.
[The extension could be done per contextual dimension]
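In code, that extension point could look like a simple registry of extractor functions; this is an illustrative assumption, not the repository's actual interface.

```python
# Continuing the sketch: any function mapping (kg, subject) -> a list of
# context subjects fits the same interface, so adding a new context type
# only means registering a new extractor.
CONTEXT_EXTRACTORS = {
    "neighbor": neighbor_context,
    "path": sample_path,
    "edge": edge_context,
    # "k_degree": k_degree_context,   # hypothetical extension: K-degree neighbors
}
```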
To utilize these three types of context, we combine them by jointly maximizing their objective functions.
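The paper gives the exact objective; as a rough, hedged sketch, each context type contributes a log-likelihood term of the form log p(s_i | c(s_i)), modeled with a softmax over subjects, and the terms are combined with weights. The mean-pooled context below stands in for GAKE's attention-weighted combination, and the lambda weights are assumed hyperparameters.

```python
import numpy as np

def log_prob_subject_given_context(emb, subject_idx, context_idxs):
    """Softmax-style term of a context objective: score every subject
    against the mean embedding of the given context and normalize.
    (GAKE weights context members with learned attention; a plain
    mean is used in this sketch.)"""
    context_vec = emb[context_idxs].mean(axis=0)
    scores = emb @ context_vec                    # one score per subject
    return scores[subject_idx] - np.log(np.exp(scores).sum())

def joint_objective(emb, pairs_by_type, lambdas):
    """Weighted sum of the per-context-type log-likelihoods
    (the lambda weights are assumptions of this sketch)."""
    return sum(
        lambdas[name] * sum(log_prob_subject_given_context(emb, s, c)
                            for s, c in pairs)
        for name, pairs in pairs_by_type.items())
```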
4 Experiments
We evaluate our proposed approach with two experiments: (1) triple classification (Bordes et al., 2013; Wang et al., 2014; Lin et al., 2015b), which determines whether a given triple is correct or not, and (2) link prediction (Wang et al., 2014; Xiao et al., 2016b), which aims to predict missing entities.
[It solves KGC tasks - Knowledge Graph Completion]
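For reference, both evaluation tasks reduce to scoring triples. Here is a minimal, hedged sketch of the two protocols, where `score_fn` and the threshold tuning are assumptions rather than the paper's exact setup.

```python
def classify_triple(score_fn, triple, threshold):
    """Triple classification: accept a triple when its score clears a
    threshold tuned on validation data."""
    return score_fn(*triple) >= threshold

def rank_tail(score_fn, head, relation, candidates, true_tail):
    """Link prediction: rank every candidate tail entity by score and
    return the rank of the correct one (basis for Mean Rank / Hits@k)."""
    ranked = sorted(candidates, key=lambda t: score_fn(head, relation, t),
                    reverse=True)
    return ranked.index(true_tail) + 1
```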