

Microsoft: DAT278x
From Graph to Knowledge Graph – Algorithms and Applications

Module 3: Graph Representation Learning

Representation learning problem

Basically, given an input graph, we design and extract structural features for nodes and edges, such as different centrality scores for nodes or various similarity scores for a pair of nodes on an edge.
In other words, a hand-crafted feature matrix is created at the cost of expensive human and computing effort.

What is the issue here? 

First, we need to pre-design the features according to our domain knowledge
and graph mining experience. The quality of the mining tasks therefore largely depends on the hand-crafted features. Second, doing so requires significant human effort as well as, potentially, very expensive computational cost.

Instead of manually engineering features with human effort, the idea is to learn them automatically.

To address these two issues, feature representation learning has recently been proposed because of its power to capture internal relations from rich, complex data. Instead of hand-crafting features from the networks, graph representation learning aims to learn a latent feature matrix. By latent, here, we mean that we do not design or know the meaning of each element. Unsupervised representation learning
would enable automatic discovery of useful and meaningful latent features from the graph.

Formally, the problem of representation learning in graphs is defined as follows.



The learning goal is to map each node into a latent low-dimensional space such that the graph's structural information is encoded into the distributed node representations.

Graph representation learning is also known as graph embedding or network embedding.

Skip-gram based graph embedding (DeepWalk & node2vec)

One of the important graph representation learning ideas is inspired by word embedding techniques
in natural language processing. The basic assumption of word embedding is that words appearing close to each other in a sentence or document are interrelated in human natural language. The key idea of skip-gram based word embedding models is then to predict the words that surround each other in the text corpus.

How can the word embedding techniques used on documents be applied to graphs?

How do we transform the graph into a document of words? How can we transform the nonlinear graph structure into linear structures?

In particular, notice that graph structures are not linear, whereas the words in a document are laid out linearly. We can treat a node path as a sentence, with each node representing a word. We can transform the nonlinear graph structure into linear node paths simply by using random walks over the graph to generate node paths.

Choose a starting node and generate the possible paths from that node (DFS).
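As a concrete illustration, here is a minimal Python sketch of generating node paths with uniform random walks (the DeepWalk-style first step); the adjacency-list format, parameter values, and toy graph are illustrative, not from the course.

```python
import random

def random_walks(adj, num_walks=10, walk_length=40, seed=0):
    """Generate truncated random walks (node paths) over a graph.

    adj: dict mapping each node to a list of its neighbors
         (a hypothetical adjacency-list representation).
    Returns a list of node paths; each path plays the role of a sentence.
    """
    rng = random.Random(seed)
    walks = []
    nodes = list(adj)
    for _ in range(num_walks):
        rng.shuffle(nodes)              # start one walk from every node
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                neighbors = adj[walk[-1]]
                if not neighbors:       # dead end: cut the walk short
                    break
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

# Toy graph: a triangle (0, 1, 2) plus a pendant node 3.
toy = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
paths = random_walks(toy, num_walks=2, walk_length=5)
```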

Understanding skip-gram based graph embedding I



Recall that the graph representation learning technique we introduced in the previous section includes two steps: (1) random walks over the graph to generate node paths; (2) a skip-gram model applied to the node paths generated in the first step.
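As a sketch of the second step, the node paths can be fed to an off-the-shelf skip-gram implementation such as gensim's Word2Vec (sg=1 selects skip-gram, negative=5 enables negative sampling); the hyperparameter values here are illustrative.

```python
from gensim.models import Word2Vec

# `paths` are node paths from the random-walk step; Word2Vec expects
# sentences of string tokens, so node ids are stringified first.
sentences = [[str(v) for v in walk] for walk in paths]
model = Word2Vec(sentences, vector_size=64, window=5, sg=1,
                 negative=5, min_count=0, epochs=5)
vec = model.wv["2"]   # 64-dimensional latent vector for node 2
```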

Basically, in word embedding research, it was found that the objective of the skip-gram model with negative sampling is actually equivalent to matrix factorization. The matrix being factorized is the PMI matrix,
short for pointwise mutual information matrix.
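For reference, the pointwise mutual information of a word-context pair, and the shifted matrix that skip-gram with k negative samples implicitly factorizes (a result due to Levy and Goldberg), can be written as:

```latex
\mathrm{PMI}(w, c) = \log \frac{P(w, c)}{P(w)\,P(c)},
\qquad
M_{wc} = \mathrm{PMI}(w, c) - \log k
```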

Understanding skip-gram based graph embedding II

Essentially, the graph representation learning framework we introduced before is, in theory, performing implicit matrix factorization.

Here we show a matrix factorization model called NetMF for graph representation learning. The main idea of this model is to explicitly factorize the closed-form matrix by using singular value decomposition (SVD). It turns out that, by factorizing the matrix explicitly, the latent representation learned for each node is more effective for downstream graph applications, such as node clustering and classification.
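Below is a minimal NumPy sketch of this explicit-factorization idea under simplifying assumptions (dense matrices, no isolated nodes): build the closed-form DeepWalk matrix (vol(G)/(T*b)) * (sum of P^r for r = 1..T) * D^{-1} with P = D^{-1}A, apply an element-wise truncated log, and factorize it with SVD. The parameter values and toy graph are illustrative.

```python
import numpy as np

def netmf_like_embedding(A, dim=2, window=3, neg=1):
    """Hedged sketch of NetMF-style explicit matrix factorization."""
    deg = A.sum(axis=1)                       # node degrees (assumed nonzero)
    vol = deg.sum()                           # volume of the graph
    P = A / deg[:, None]                      # transition matrix D^{-1} A
    S = sum(np.linalg.matrix_power(P, r) for r in range(1, window + 1))
    M = (vol / (window * neg)) * S / deg[None, :]   # closed-form matrix
    logM = np.log(np.maximum(M, 1.0))         # element-wise truncated log
    U, s, _ = np.linalg.svd(logM)             # explicit factorization via SVD
    return U[:, :dim] * np.sqrt(s[:dim])      # one latent vector per node

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
embeddings = netmf_like_embedding(A)          # shape (4, 2)
```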

Heterogeneous graph embedding I

The challenge here is how to effectively preserve the concept of node context among multiple types of nodes in heterogeneous graph structures.

A meta-path in a heterogeneous graph defines node paths with different semantics. For example, in an academic heterogeneous graph, the meta-path APA, short for Author-Paper-Author, represents two authors co-authoring a paper. The goal is to generate node paths that are able to capture both the semantic and structural correlations between different types of nodes in the heterogeneous graph. The walk will not randomly jump to all of a node's neighbors, but only to neighbors with the specific type indicated by the meta-path.
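A minimal Python sketch of a meta-path guided random walk under assumed inputs (an adjacency list plus a node-type map, both hypothetical): at each step the walk only considers neighbors whose type matches the next symbol of the meta-path.

```python
import random

def metapath_walk(adj, node_type, start, metapath=("A", "P", "A"),
                  length=9, seed=0):
    """Hedged sketch of a meta-path guided random walk.

    adj:       dict node -> list of neighbors
    node_type: dict node -> type label, e.g. "A" (author), "P" (paper)
    metapath:  type pattern to follow; ("A", "P", "A") is the APA
               co-authorship meta-path. Since meta-paths used for walks
               start and end with the same type, cycling metapath[1:]
               keeps the walk alternating A-P-A-P-...
    """
    rng = random.Random(seed)
    pattern = list(metapath[1:])
    walk = [start]
    for i in range(length - 1):
        wanted = pattern[i % len(pattern)]
        candidates = [v for v in adj[walk[-1]] if node_type[v] == wanted]
        if not candidates:          # no neighbor of the required type
            break
        walk.append(rng.choice(candidates))
    return walk

# Toy academic graph: authors a1, a2 co-wrote p1; a2 also wrote p2.
adj = {"a1": ["p1"], "a2": ["p1", "p2"], "p1": ["a1", "a2"], "p2": ["a2"]}
types = {"a1": "A", "a2": "A", "p1": "P", "p2": "P"}
walk = metapath_walk(adj, types, "a1")   # e.g. ['a1', 'p1', 'a2', ...]
```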


Heterogeneous graph embedding II

How can we incorporate the heterogeneous node paths generated by meta-path random walks into a skip-gram model?

If node type information is ignored when predicting a context node of a specific type t, the skip-gram model will always fail whenever the predicted context node's type differs from the underlying context node type.

In the heterogeneous skip-gram, the dimension of the multinomial distribution for type-t nodes is determined by the number of t-type nodes in the heterogeneous graph. To optimize this model quickly, we use the same technique used in the homogeneous skip-gram:
negative sampling.
The difference lies in that, when we sample negative nodes, we take the context node's type into consideration.
Finally, this heterogeneous skip-gram model is learned by using the stochastic gradient descent algorithm.
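A small sketch of what type-aware negative sampling could look like (the nodes_by_type index is a hypothetical helper, not part of the course material): negatives are drawn only from nodes that share the context node's type.

```python
import random

def sample_negatives(context_node, node_type, nodes_by_type, k=5,
                     rng=random.Random(0)):
    """Draw k negative nodes of the same type as the context node,
    respecting the per-type multinomial of the heterogeneous skip-gram."""
    t = node_type[context_node]
    pool = [v for v in nodes_by_type[t] if v != context_node]
    return rng.sample(pool, min(k, len(pool)))
```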

Very nice examples with MAG (Microsoft Academic Graph).

Graph convolutional network I

The success of deep neural networks is built upon the flexibility to transform information from layer to layer. Basically, we can take a fixed-size small grid, say a three-by-three grid, and combine information from the pixels of the nine cells of this sub-grid. We can then sweep through the full grid
by moving the three-by-three sub-grid: at the next sub-grid, we sum over all nine pixels again to pass the information into the second layer. This is the convolution over the next sub-grid, one by one. In the end, we can transform the input layer into a hidden layer by leveraging convolutions defined over the sub-grids.
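For concreteness, here is a minimal NumPy sketch of the sweep just described: slide a three-by-three kernel over an image and combine the nine pixels under it at each position (a kernel of ones, so each output entry is the plain sum). All values are illustrative.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image; at each position, combine the
    pixels of the sub-grid under it into one output value."""
    kh, kw = kernel.shape
    out_h, out_w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
hidden = conv2d(img, np.ones((3, 3)))   # each entry sums a 3x3 sub-grid
```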

How do we define convolution over graphs with arbitrary structure and size?

Graph convolutional network II

Basically, we are given a graph with its adjacency matrix A and its corresponding degree matrix D. At the same time, each node may also be associated with some features, represented in the feature matrix X. Note, however, that the problem setting and the solution work with or without the feature matrix X. We need to find a function F with parameters W to transform the graph structure and existing features into the first hidden layer of the neural network.



It turns out that F could be any non-linear activation function.

To normalize the adjacency matrix A, we can leverage the degree matrix D. In this way, the multiplication
with the normalized adjacency matrix A will average the impact from the neighborhood.
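Putting the pieces together, here is a minimal NumPy sketch of one graph-convolution layer under hedged choices: row normalization D^{-1}(A + I) to match the "average impact" description (Kipf and Welling's GCN uses the symmetric variant D^{-1/2}(A + I)D^{-1/2}), ReLU as the non-linear activation, and random X and W purely for illustration.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: H = ReLU(D^{-1} (A + I) X W).

    Adding self-loops (A + I) lets each node keep its own features;
    row normalization by D averages the impact from the neighborhood.
    """
    A_hat = A + np.eye(A.shape[0])                 # adjacency with self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))       # inverse degree matrix
    return np.maximum(0.0, D_inv @ A_hat @ X @ W)  # ReLU activation

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))   # feature matrix (use the identity if absent)
W = rng.normal(size=(3, 2))   # learnable weight matrix
H = gcn_layer(A, X, W)        # first hidden layer, shape (4, 2)
```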

