Source: https://www.cs.mcgill.ca/~wlh/grl_book/files/GRL_Book-Chapter_4-Knowledge_Graphs.pdf
@article{hamilton2020grl,
  author    = {Hamilton, William L.},
  title     = {Graph Representation Learning},
  journal   = {Synthesis Lectures on Artificial Intelligence and Machine Learning},
  volume    = {14},
  number    = {3},
  pages     = {1--159},
  year      = {2020},
  publisher = {Morgan and Claypool}
}
The methods in this chapter are shallow embedding approaches, where we learn a unique embedding for each node.
Knowledge graph completion: we are given a multi-relational graph G = (V, E), where the edges are defined as tuples e = (u, r_i, v) indicating the presence of a particular relation r_i holding between two nodes. Such multi-relational graphs are often referred to as knowledge graphs, since we can interpret the tuple (u, r_i, v) as specifying that a particular "fact" holds between the two nodes u and v; for example, (Paris, capital-of, France).
Compared to the simple-graph setting, we now have to deal with the presence of multiple different types of edges. To address this complication, we augment the decoder to make it multi-relational: instead of taking only a pair of node embeddings as input, the decoder now accepts a pair of node embeddings as well as a relation type.
In the multi-relational setting, we will also see a diversity of decoders and loss functions. However, nearly all multi-relational embedding methods simply define the similarity measure directly based on the adjacency tensor ...
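In the book's notation, this means the decoder is trained so that DEC(u, r_i, v) ≈ A[u, r_i, v], where A denotes the |V| × |R| × |V| adjacency tensor of the knowledge graph (my reconstruction of the elided formula from Chapter 4).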
Most multi-relational embedding methods were specifically designed for relation prediction.
One popular loss function that is both efficient and suited to our task is the cross-entropy loss with negative sampling. The other popular loss function used for multi-relational node embedding is the margin loss.
Since we are feeding the output of the decoder to a logistic function, we obtain normalized scores in [0, 1] that can be interpreted as probabilities.
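To make the two losses concrete, here is a minimal PyTorch sketch (my own illustration, not code from the book; `dec` stands for any of the decoders discussed below, and `z_v_neg` holds embeddings of sampled "corrupted" tail nodes):

```python
import torch
import torch.nn.functional as F

def cross_entropy_with_negatives(dec, z_u, r, z_v, z_v_neg):
    """Binary cross-entropy with negative sampling.

    dec(z_u, r, z_v) returns a raw score; the logistic (sigmoid)
    turns it into a probability of the edge existing.
    """
    pos_score = dec(z_u, r, z_v)          # true triples
    neg_score = dec(z_u, r, z_v_neg)      # corrupted triples
    # softplus(-x) = -log(sigmoid(x)), so this is
    # -log sigma(pos) - log sigma(-neg), averaged over the batch
    return F.softplus(-pos_score).mean() + F.softplus(neg_score).mean()

def margin_loss(dec, z_u, r, z_v, z_v_neg, delta=1.0):
    """Max-margin (hinge) loss: the true triple should score at
    least `delta` higher than the corrupted one."""
    pos_score = dec(z_u, r, z_v)
    neg_score = dec(z_u, r, z_v_neg)
    return torch.clamp(neg_score - pos_score + delta, min=0).mean()
```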
One of the simplest and earliest approaches to learning multi-relational embeddings, often termed RESCAL, defined the decoder as [Nickel et al., 2011]: DEC(u, r_i, v) = z_u^T R_i z_v, where R_i is a learnable d × d matrix associated with the relation r_i.
In the RESCAL decoder, we associate a trainable matrix with each relation. However, one limitation of this approach (and a reason why it is not often used) is its high computational and statistical cost for representing relations. There are O(d^2) parameters for each relation type in RESCAL, which means that relations require an order of magnitude more parameters to represent, compared to entities.
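A minimal sketch of the RESCAL bilinear decoder (my own illustration; the names are hypothetical), which also makes the O(d^2)-parameters-per-relation cost visible:

```python
import torch

d, num_relations = 64, 10

# One d x d matrix per relation: O(d^2) parameters per relation type,
# versus O(d) per entity embedding.
R = torch.randn(num_relations, d, d, requires_grad=True)

def rescal_dec(z_u, r, z_v):
    """Bilinear RESCAL score: z_u^T R_r z_v."""
    return z_u @ R[r] @ z_v

z_u, z_v = torch.randn(d), torch.randn(d)
score = rescal_dec(z_u, 3, z_v)
```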
Translational decoders: One popular class of decoders represents relations as translations in the embedding space. This approach was initiated by Bordes et al. [2013]'s TransE model, which defined the decoder as DEC(u, r_i, v) = -||z_u + r_i - z_v||, i.e., the negated Euclidean distance between the translated head embedding and the tail embedding.
In these approaches, we represent each relation using a d-dimensional embedding. The likelihood of an edge depends on the distance between the embedding of the head node, after translating it according to the relation embedding, and the embedding of the tail node: the smaller this distance, the more likely the edge. TransE is one of the earliest multi-relational decoders proposed and continues to be a strong baseline in many applications.
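A sketch of the TransE decoder (again my own illustration, under the same hypothetical setup as above):

```python
import torch

d, num_relations = 64, 10

# One d-dimensional vector per relation: only O(d) parameters per relation.
r_emb = torch.randn(num_relations, d, requires_grad=True)

def transe_dec(z_u, r, z_v):
    """TransE score: negated Euclidean distance after translating
    the head embedding by the relation embedding."""
    return -torch.norm(z_u + r_emb[r] - z_v, p=2)
```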
Multi-linear dot products: Rather than defining a decoder based upon translating embeddings, a second popular line of work develops multi-relational decoders by generalizing the dot-product decoder from simple graphs. In this approach, often termed DistMult and first proposed by Yang et al. [2015], we define the decoder as a multi-linear dot product: DEC(u, r_i, v) = sum_j z_u[j] * r_i[j] * z_v[j], where r_i is now a d-dimensional relation embedding.
One limitation of the DistMult decoder ... is that it can only encode symmetric relations. This is a serious limitation as many relation types in multi-relational graphs are directed and asymmetric.
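A sketch of the DistMult decoder (my own illustration), including a check that makes the symmetry limitation explicit:

```python
import torch

d, num_relations = 64, 10
r_emb = torch.randn(num_relations, d, requires_grad=True)

def distmult_dec(z_u, r, z_v):
    """Multi-linear dot product: sum_j z_u[j] * r[j] * z_v[j]."""
    return (z_u * r_emb[r] * z_v).sum()

# The score is symmetric in u and v, so the decoder cannot
# distinguish the direction of a relation:
z_u, z_v = torch.randn(d), torch.randn(d)
assert torch.isclose(distmult_dec(z_u, 0, z_v), distmult_dec(z_v, 0, z_u))
```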
To address this issue, Trouillon et al. [2016] proposed augmenting the DistMult decoder by employing complex-valued embeddings. They define the ComplEx decoder as DEC(u, r_i, v) = Re(<z_u, r_i, conj(z_v)>), where the embeddings now live in C^d, conj denotes the complex conjugate, and Re takes the real part. Conjugating the tail embedding breaks the symmetry between u and v, so asymmetric relations can be modeled.
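A sketch of the ComplEx decoder (my own illustration, using PyTorch's complex tensor support):

```python
import torch

d, num_relations = 64, 10

# Complex-valued relation embeddings.
r_emb = torch.randn(num_relations, d, dtype=torch.cfloat)

def complex_dec(z_u, r, z_v):
    """ComplEx score: Re(<z_u, r, conj(z_v)>). Conjugating the tail
    embedding breaks the u/v symmetry of DistMult."""
    return (z_u * r_emb[r] * z_v.conj()).sum().real

z_u = torch.randn(d, dtype=torch.cfloat)
z_v = torch.randn(d, dtype=torch.cfloat)
# In general, complex_dec(z_u, 0, z_v) != complex_dec(z_v, 0, z_u).
```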
Compositionality: Lastly, we can consider whether or not the decoders can encode compositionality between relation representations, i.e., whether chaining two relations r_1 and r_2 can itself be represented as a relation (of the form r_1 ∘ r_2 = r_3).
For example, in TransE we can accommodate this ... We can similarly model compositionality in RESCAL ... (a sketch of both follows below).
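As I understand the elided details from the book: TransE composes relations by adding their translation vectors, while RESCAL composes them by multiplying their relation matrices. A small sketch (my own illustration):

```python
import torch

d = 64

# TransE: composing two relations is adding their translation vectors,
# since z_u + r1 + r2 ~ z_v encodes the chained relation.
r1, r2 = torch.randn(d), torch.randn(d)
r_composite = r1 + r2

# RESCAL: composing two relations is multiplying their matrices,
# since z_u^T (R1 R2) z_v scores the chained relation.
R1, R2 = torch.randn(d, d), torch.randn(d, d)
R_composite = R1 @ R2
```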