Microsoft: DAT278x
From Graph to Knowledge Graph – Algorithms and Applications
Module 3: Graph Representation Learning
Representation learning problem
Basically, given an input graph, we design and extract structural features for nodes and edges, such as different centrality scores for nodes and various similarity scores for a pair of nodes on an edge.
In other words, a hand-crafted feature matrix is created with expensive human and computing effort.
What is the issue here?
First, we need to pre-design the features according to our domain knowledge
and graph mining experience. The quality of the mining tasks therefore depends largely on the hand-crafted features. Second, doing so requires significant human effort as well as, potentially, very expensive computation.
Instead of manually building/engineering features with human effort, the idea is to learn them automatically (learning rather than engineering).
To address these two issues, feature representation learning has recently been proposed because of its power to capture internal relations in rich, complex data. Instead of hand-crafting features from the networks, graph representation learning aims to learn a latent feature matrix. By latent, we mean that we do not design or know the meaning of each element. Unsupervised representation learning
would enable automatic discovery of useful and meaningful latent features from the graph.
Formally, the problem of representation learning in graphs is defined as follows.
The learning goal is to map each node into a latent low-dimensional space such that graph structure information is encoded into distributed node representations.
Graph representation learning is also known as graph embedding or network embedding.
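In symbols, a minimal statement of this goal (the notation is my own shorthand, not quoted from the course) is to learn an embedding function

\[ f : V \rightarrow \mathbb{R}^{d}, \qquad d \ll |V|, \]

so that nodes close to each other in the graph structure are mapped to nearby points in the latent space.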
Skip-gram based graph embedding (DeepWalk & node2vec)
One of the important graph representation learning ideas is inspired by word embedding techniques
in natural language processing. The basic assumption of word embedding is that words appearing close to each other in a sentence or document are interrelated in human natural language. The key idea of skip-gram based word embedding models is then to predict the words that surround each other in the text corpus.
How can we apply the word embedding approach used for documents to graphs?
How do we transform the graph into a word document? How can we transform the nonlinear graph structure into linear structures?
In particular, notice that graph structures are not linear, whereas the words in a document are linear in language. We can consider a node path as a sentence, with each node representing a word. We can transform the nonlinear graph structure into linear node paths simply by using random walks over the graph to generate node paths.
Choose a starting node and generate paths from that node (DFS-style).
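A minimal sketch of this two-step pipeline in Python (random walks, then a skip-gram model); networkx and gensim are assumed to be available, and all names and hyperparameters below are illustrative rather than taken from DeepWalk or node2vec themselves:

import random
import networkx as nx
from gensim.models import Word2Vec

def random_walk(graph, start, length=10):
    # Build one node path by repeatedly jumping to a uniformly random neighbor.
    walk = [start]
    for _ in range(length - 1):
        neighbors = list(graph.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return [str(node) for node in walk]  # skip-gram treats nodes as "words"

graph = nx.karate_club_graph()  # any example graph
walks = [random_walk(graph, node) for node in graph.nodes() for _ in range(10)]

# Each walk is a "sentence", each node a "word" (DeepWalk-style training).
model = Word2Vec(walks, vector_size=64, window=5, sg=1, negative=5, min_count=0)
vector_for_node_0 = model.wv["0"]

node2vec differs mainly in how the next node of the walk is chosen (biased rather than uniform transitions), not in the skip-gram step.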
Understanding skip-gram based graph embedding I
Recall that the graph representation learning technique we introduced in the previous section includes two steps: (1) random walks over the graph to generate node paths; (2) a skip-gram model applied to the node paths generated in the first step.
Basically, in word embedding, researchers found that the objective of the skip-gram model with negative sampling is actually equivalent to matrix factorization. The matrix that is factorized is the PMI matrix,
short for pointwise mutual information matrix.
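For reference, the result from the word-embedding literature (Levy and Goldberg) is that skip-gram with negative sampling implicitly factorizes a shifted PMI matrix; in the notation below (standard in that literature, not spelled out in these notes), #(w, c) counts word-context co-occurrences, |D| is the number of word-context pairs, and k is the number of negative samples:

\[ M_{wc} = \mathrm{PMI}(w, c) - \log k = \log\frac{\#(w,c)\,|D|}{\#(w)\,\#(c)} - \log k \]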
Understanding skip-gram based graph embedding II
Essentially, the graph representation learning framework we introduced before is, in theory, performing implicit matrix factorization.
Here we show a matrix factorization model called NetMF for graph representation learning. The main idea of this model is to explicitly factorize the closed-form matrix using singular value decomposition (SVD). It turns out that, by factorizing the matrix explicitly, the latent representation learned for each node is more effective for downstream graph applications such as node clustering and classification.
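A hedged sketch of the explicit-factorization step: once the closed-form matrix M has been constructed (its construction is omitted here), a truncated SVD gives one latent vector per node. numpy/scipy are assumed, and the function below is illustrative, not the NetMF reference implementation:

import numpy as np
from scipy.sparse.linalg import svds

def factorize_embeddings(M, dim=64):
    # Explicitly factorize the closed-form matrix M with a rank-dim SVD,
    # instead of optimizing the skip-gram objective implicitly.
    U, S, _ = svds(M.astype(float), k=dim)
    return U * np.sqrt(S)  # row i is the latent representation of node i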
Heterogeneous graph embedding I
The challenge here is how to effectively preserve the concept of node context among multiple types of nodes in heterogeneous graph structures.
A meta-path in a heterogeneous graph defines node paths with different semantics. For example, in an academic heterogeneous graph, the meta-path APA, short for Author-Paper-Author, represents two authors co-authoring a paper. The goal is to generate node paths that capture both the semantic and structural correlations between different types of nodes in the heterogeneous graph. The walk will not randomly jump to all of a node's neighbors, but only to neighbors of the specific type indicated by the meta-path.
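A minimal sketch of a meta-path-guided walk, assuming a networkx-style graph in which every node carries a 'type' attribute (e.g., 'A' for author, 'P' for paper) and the meta-path is given as a type sequence such as ['A', 'P', 'A']; the names and attributes here are illustrative assumptions:

import random

def metapath_walk(graph, start, metapath, length=20):
    # For a symmetric meta-path like A-P-A, cycle through types A, P, A, P, ...
    cycle = metapath[:-1] if metapath[0] == metapath[-1] else metapath
    walk, step = [start], 0
    while len(walk) < length:
        next_type = cycle[(step + 1) % len(cycle)]
        # Only neighbors whose type matches the next type in the meta-path.
        candidates = [n for n in graph.neighbors(walk[-1])
                      if graph.nodes[n]["type"] == next_type]
        if not candidates:
            break
        walk.append(random.choice(candidates))
        step += 1
    return walk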
Heterogeneous graph embedding II
How can we incorporate the heterogeneous node paths generated by meta-path random walks into a skip-gram model?
If node type information is ignored, then when predicting a context node of a specific type t, the skip-gram model will fail whenever the predicted context node has a different type from the underlying context node.
In the heterogeneous skip-gram, the dimension of the multinomial distribution for type-t nodes is determined by the number of t-type nodes in the heterogeneous graph. To optimize this model efficiently, we use the same technique as in the homogeneous skip-gram:
negative sampling.
The difference is that when we sample negative nodes, we take the context node's type into consideration.
Finally, this heterogeneous skip-gram model is learned using the stochastic gradient descent algorithm.
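A hedged sketch of the type-aware negative sampling step: negatives are drawn only from nodes of the same type as the context node. The nodes_by_type dictionary is an assumed helper (type -> list of nodes), and in practice the draw is usually weighted by node frequency rather than uniform:

import random

def sample_negatives(nodes_by_type, context_node, context_type, k=5):
    # Heterogeneous skip-gram: negative nodes share the context node's type.
    candidates = [n for n in nodes_by_type[context_type] if n != context_node]
    return random.sample(candidates, min(k, len(candidates)))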
Very nice examples with MAG (Microsoft Academic Graph).
Graph convolutional network I
The success of deep neural networks is built upon the flexibility to transform information from layer to layer. Basically, we can take a small fixed-size grid, say a three-by-three grid, and combine information from the pixels of the nine cells of this sub-grid. We can then sweep through the full grid
by moving the three-by-three sub-grid: we move it to the next sub-grid, sum over all nine pixels again, and pass the information into the second layer. This is the convolution over the next sub-grid, one sub-grid at a time. In the end, we can transform the input layer into a hidden layer by leveraging convolutions defined over the sub-grid.
How do we define convolution over graphs with arbitrary structure and size?
Graph convolutional network II
Basically, we are given a graph with its adjacency matrix A and its corresponding degree matrix D. At the same time, each node may also be associated with some features, represented in the feature matrix X. Note, however, that the problem setting and the solution work with or without the feature matrix X. We need to find a function F with parameters W that transforms the graph structure and existing features into the first hidden layer of the neural network.
It turns out that F could be any non-linear activation function.
To normalize the adjacency matrix A, we can leverage the degree matrix D. In this way, multiplication
with the normalized adjacency matrix A averages the impact from the neighborhood.
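A minimal numpy sketch of one such layer. The symmetric normalization with added self-loops follows the common GCN formulation (Kipf and Welling), which is an assumption on top of these notes rather than something they spell out:

import numpy as np

def gcn_layer(A, X, W):
    # One graph convolution: H = relu(D^{-1/2} (A + I) D^{-1/2} X W).
    # Adding the identity keeps each node's own features in the neighborhood average.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)  # ReLU as the non-linear function F

When no feature matrix X is given, the identity matrix can be used in its place.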
Feel free to comment. Constructive criticism is always welcome.