
Graph Representation Learning - Chapter 4 - Multi-relational Data and Knowledge Graphs

Source: https://www.cs.mcgill.ca/~wlh/grl_book/files/GRL_Book-Chapter_4-Knowledge_Graphs.pdf
@article{hamilton2020grl,
  author    = {Hamilton, William L.},
  title     = {Graph Representation Learning},
  journal   = {Synthesis Lectures on Artificial Intelligence and Machine Learning},
  volume    = {14},
  number    = {3},
  pages     = {1-159},
  year      = {2020},
  publisher = {Morgan and Claypool}
}
 
shallow embedding approaches, where we learn a unique embedding for each node
 
knowledge graph completion: we are given a multi-relational graph G = (V, E), where the edges are defined as tuples e = (u, r_i, v) indicating the presence of a particular relation r_i holding between two nodes. Such multi-relational graphs are often referred to as knowledge graphs, since we can interpret the tuple (u, r_i, v) as specifying that a particular “fact” holds between the two nodes u and v.
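As a concrete picture of this edge-as-tuple view, here is a minimal sketch of a toy knowledge graph stored as a set of triples; the entity and relation names are purely illustrative and not from the book:

```python
# Toy multi-relational graph as (head, relation, tail) triples.
# Entity and relation names are illustrative only.
triples = [
    ("Montreal", "located_in", "Canada"),
    ("McGill", "located_in", "Montreal"),
    ("McGill", "is_a", "University"),
]

# Knowledge graph completion then asks: given these observed "facts",
# how plausible is a held-out candidate triple such as
# ("McGill", "located_in", "Canada")?
```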
 
now we have to deal with the presence of multiple different types of edges. To address this complication, we augment our decoder to make it multi-relational. Instead of only taking a pair of node embeddings as input, we now define the decoder as accepting a pair of node embeddings as well as a relation type
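A minimal sketch of what this change means in code, assuming the per-relation parameters are stored in a dictionary and `decoder_fn` stands for any concrete decoder (RESCAL, TransE, DistMult, ComplEx) discussed below; all names here are my own:

```python
def score_edge(decoder_fn, embeddings, rel_params, u, r, v):
    """Multi-relational decoder call: score a candidate edge (u, r, v).

    Unlike the simple-graph case DEC(z_u, z_v), the decoder now also receives
    the trainable parameters of the relation type r (a matrix, a translation
    vector, a diagonal vector, ... depending on the chosen decoder).
    """
    return decoder_fn(embeddings[u], rel_params[r], embeddings[v])
```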

In the multi-relational setting, we will also see a diversity of decoders and loss functions. However, nearly all multi-relational embedding methods simply define the similarity measure directly based on the adjacency tensor ... 
 
most multi-relational embedding methods were specifically designed for relation prediction.
 
One popular loss function that is both efficient and suited to our task is the cross-entropy loss with negative sampling. The other popular loss function used for multi-relational node embedding is the margin loss.
 
Since we are feeding the output of the decoder to a logistic function, we obtain normalized scores in [0, 1] that can be interpreted as probabilities.
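A sketch of both losses, assuming each positive edge is compared against a small set of sampled negative (corrupted) edges; `pos_score` and `neg_scores` are raw decoder outputs, and the function names are mine:

```python
import numpy as np

def sigmoid(x):
    # Logistic function: maps raw decoder scores to (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def cross_entropy_with_negatives(pos_score, neg_scores):
    """Negative-sampling cross-entropy: push the true edge toward probability 1
    and each sampled (corrupted) edge toward probability 0."""
    neg_scores = np.asarray(neg_scores, dtype=float)
    return -np.log(sigmoid(pos_score)) - np.sum(np.log(sigmoid(-neg_scores)))

def margin_loss(pos_score, neg_scores, margin=1.0):
    """Max-margin (hinge) loss: the true edge should outscore every sampled
    negative by at least `margin`."""
    neg_scores = np.asarray(neg_scores, dtype=float)
    return np.sum(np.maximum(0.0, margin - pos_score + neg_scores))
```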

 
 
One of the simplest and earliest approaches to learning multi-relational embeddings—often termed RESCAL—defined the decoder as [Nickel et al., 2011]
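A minimal sketch of the RESCAL bilinear decoder under the usual setup, with d-dimensional node embeddings and a trainable d x d matrix per relation (function and variable names are mine):

```python
import numpy as np

def rescal_score(z_u: np.ndarray, R_rel: np.ndarray, z_v: np.ndarray) -> float:
    """RESCAL decoder: the bilinear form z_u^T R_rel z_v.

    R_rel is a full d x d matrix for the relation, which is exactly where the
    O(d^2) per-relation parameter cost discussed below comes from.
    """
    return float(z_u @ R_rel @ z_v)
```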
 

 
In the RESCAL decoder, we associate a trainable matrix with each relation. However, one limitation of this approach—and a reason why it is not often used—is its high computational and statistical cost for representing relations. There are O(d^2) parameters for each relation type in RESCAL, which means that relations require an order of magnitude more parameters to represent, compared to entities.
 
Translational decoders: One popular class of decoders represents relations as translations in the embedding space. This approach was initiated by Bordes et al. [2013]’s TransE model, which defined the decoder as
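A sketch of the TransE decoder, assuming each relation is itself a d-dimensional translation vector and the score is the negative distance between the translated head embedding and the tail embedding (names are mine):

```python
import numpy as np

def transe_score(z_u: np.ndarray, r_rel: np.ndarray, z_v: np.ndarray) -> float:
    """TransE decoder: relations act as translations in the embedding space.

    The score -||z_u + r_rel - z_v|| is highest (closest to zero) when
    translating the head embedding by the relation embedding lands near
    the tail embedding.
    """
    return -float(np.linalg.norm(z_u + r_rel - z_v))
```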
 
In these approaches, we represent each relation using a d-dimensional embedding. The likelihood of an edge is determined by the distance between the embedding of the head node and the tail node, after translating the head node according to the relation embedding: the smaller the distance, the more likely the edge. TransE is one of the earliest multi-relational decoders proposed and continues to be a strong baseline in many applications.
 
Multi-linear dot products: Rather than defining a decoder based upon translating embeddings, a second popular line of work develops multi-relational decoders by generalizing the dot-product decoder from simple graphs. In this approach—often termed DistMult and first proposed by Yang et al. [2015]—we define the decoder as:
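A sketch of the DistMult decoder, a tri-linear generalization of the dot product that keeps only a d-dimensional vector per relation (names are mine):

```python
import numpy as np

def distmult_score(z_u: np.ndarray, r_rel: np.ndarray, z_v: np.ndarray) -> float:
    """DistMult decoder: sum_i z_u[i] * r_rel[i] * z_v[i].

    Note that swapping z_u and z_v leaves the score unchanged, which is why
    DistMult can only represent symmetric relations (see below).
    """
    return float(np.sum(z_u * r_rel * z_v))
```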
 
One limitation of the DistMult decoder ... is that it can only encode symmetric relations.  This is a serious limitation as many relation types in multi-relational graphs are directed and asymmetric. 
 
To address this issue, Trouillon et al. [2016] proposed augmenting the DistMult decoder by employing complex-valued embeddings. They define the ComplEx decoder as
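A sketch of the ComplEx decoder, assuming complex-valued embeddings for both entities and relations; taking the complex conjugate of the tail embedding is what breaks the symmetry of DistMult (names are mine):

```python
import numpy as np

def complex_score(z_u: np.ndarray, r_rel: np.ndarray, z_v: np.ndarray) -> float:
    """ComplEx decoder: Re( sum_i z_u[i] * r_rel[i] * conj(z_v[i]) ).

    All three inputs are complex-valued arrays of the same length; the score
    is no longer symmetric in u and v, so directed relations can be modeled.
    """
    return float(np.real(np.sum(z_u * r_rel * np.conj(z_v))))
```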

 
 
 
Compositionality: Lastly, we can consider whether or not the decoders can encode compositionality between relation representations of the form
 
 
For example, in TransE we can accommodate this ... We can similarly model compositionality in RESCAL .....
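Filling in the shape of this pattern as a sketch (notation recalled from the chapter rather than quoted verbatim, and the RESCAL ordering convention is my assumption):

```latex
% Compositionality: two chained relations imply a third one.
(u, \tau_1, y) \land (y, \tau_2, v) \;\Rightarrow\; (u, \tau_3, v)

% TransE accommodates this by summing the translation vectors:
\mathbf{r}_{\tau_3} = \mathbf{r}_{\tau_1} + \mathbf{r}_{\tau_2}

% RESCAL can model it through a product of the relation matrices
% (the ordering here is one convention, not a quote from the book):
\mathbf{R}_{\tau_3} = \mathbf{R}_{\tau_2}\,\mathbf{R}_{\tau_1}
```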
